2025-08-29 14:05:14.871722 | Job console starting
2025-08-29 14:05:14.884339 | Updating git repos
2025-08-29 14:05:14.948441 | Cloning repos into workspace
2025-08-29 14:05:15.220524 | Restoring repo states
2025-08-29 14:05:15.248611 | Merging changes
2025-08-29 14:05:15.248641 | Checking out repos
2025-08-29 14:05:15.489448 | Preparing playbooks
2025-08-29 14:05:16.201040 | Running Ansible setup
2025-08-29 14:05:20.398072 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-08-29 14:05:21.188538 |
2025-08-29 14:05:21.188721 | PLAY [Base pre]
2025-08-29 14:05:21.206408 |
2025-08-29 14:05:21.206574 | TASK [Setup log path fact]
2025-08-29 14:05:21.237767 | orchestrator | ok
2025-08-29 14:05:21.255808 |
2025-08-29 14:05:21.255973 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-08-29 14:05:21.295980 | orchestrator | ok
2025-08-29 14:05:21.307705 |
2025-08-29 14:05:21.307839 | TASK [emit-job-header : Print job information]
2025-08-29 14:05:21.367103 | # Job Information
2025-08-29 14:05:21.367298 | Ansible Version: 2.16.14
2025-08-29 14:05:21.367334 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-08-29 14:05:21.367367 | Pipeline: post
2025-08-29 14:05:21.367406 | Executor: 521e9411259a
2025-08-29 14:05:21.367427 | Triggered by: https://github.com/osism/testbed/commit/4170080bde3f8ebb424d0797e843b3d9d7dc2e22
2025-08-29 14:05:21.367449 | Event ID: 2bf5971c-84e1-11f0-922c-b0e8d7badef4
2025-08-29 14:05:21.378002 |
2025-08-29 14:05:21.378127 | LOOP [emit-job-header : Print node information]
2025-08-29 14:05:21.603793 | orchestrator | ok:
2025-08-29 14:05:21.603998 | orchestrator | # Node Information
2025-08-29 14:05:21.604033 | orchestrator | Inventory Hostname: orchestrator
2025-08-29 14:05:21.604059 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-08-29 14:05:21.604081 | orchestrator | Username: zuul-testbed02
2025-08-29 14:05:21.604102 | orchestrator | Distro: Debian 12.11
2025-08-29 14:05:21.604125 | orchestrator | Provider: static-testbed
2025-08-29 14:05:21.604147 | orchestrator | Region:
2025-08-29 14:05:21.604168 | orchestrator | Label: testbed-orchestrator
2025-08-29 14:05:21.604187 | orchestrator | Product Name: OpenStack Nova
2025-08-29 14:05:21.604206 | orchestrator | Interface IP: 81.163.193.140
2025-08-29 14:05:21.626171 |
2025-08-29 14:05:21.626317 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-08-29 14:05:22.115961 | orchestrator -> localhost | changed
2025-08-29 14:05:22.124264 |
2025-08-29 14:05:22.124401 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-08-29 14:05:23.302485 | orchestrator -> localhost | changed
2025-08-29 14:05:23.317318 |
2025-08-29 14:05:23.317479 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-08-29 14:05:23.712065 | orchestrator -> localhost | ok
2025-08-29 14:05:23.720218 |
2025-08-29 14:05:23.720345 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-08-29 14:05:23.755510 | orchestrator | ok
2025-08-29 14:05:23.780267 | orchestrator | included: /var/lib/zuul/builds/adc95f4e315d487b829a740e42876478/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-08-29 14:05:23.795465 |
2025-08-29 14:05:23.795605 | TASK [add-build-sshkey : Create Temp SSH key]
2025-08-29 14:05:26.479576 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-08-29 14:05:26.479818 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/adc95f4e315d487b829a740e42876478/work/adc95f4e315d487b829a740e42876478_id_rsa
2025-08-29 14:05:26.479857 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/adc95f4e315d487b829a740e42876478/work/adc95f4e315d487b829a740e42876478_id_rsa.pub
2025-08-29 14:05:26.479884 | orchestrator -> localhost | The key fingerprint is:
2025-08-29 14:05:26.479908 | orchestrator -> localhost | SHA256:cYqEGxcpFZIYOi6erhhcp8wOZJ1/kuI8WeVbFQkeh+8 zuul-build-sshkey
2025-08-29 14:05:26.479930 | orchestrator -> localhost | The key's randomart image is:
2025-08-29 14:05:26.479960 | orchestrator -> localhost | +---[RSA 3072]----+
2025-08-29 14:05:26.479983 | orchestrator -> localhost | | .o.o+o oo.. |
2025-08-29 14:05:26.480004 | orchestrator -> localhost | | .. oo....oo |
2025-08-29 14:05:26.480024 | orchestrator -> localhost | |o o.o ..o . |
2025-08-29 14:05:26.480045 | orchestrator -> localhost | |... .= o + o |
2025-08-29 14:05:26.480065 | orchestrator -> localhost | |.+ +..+ S o |
2025-08-29 14:05:26.480087 | orchestrator -> localhost | |* = +... . E |
2025-08-29 14:05:26.480107 | orchestrator -> localhost | |.= =o+ .o |
2025-08-29 14:05:26.480127 | orchestrator -> localhost | |o.=o. o. |
2025-08-29 14:05:26.480147 | orchestrator -> localhost | |+. +. |
2025-08-29 14:05:26.480168 | orchestrator -> localhost | +----[SHA256]-----+
2025-08-29 14:05:26.480294 | orchestrator -> localhost | ok: Runtime: 0:00:01.840785
2025-08-29 14:05:26.488669 |
2025-08-29 14:05:26.488783 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-08-29 14:05:26.508183 | orchestrator | ok
2025-08-29 14:05:26.519133 | orchestrator | included: /var/lib/zuul/builds/adc95f4e315d487b829a740e42876478/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-08-29 14:05:26.529193 |
2025-08-29 14:05:26.529315 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-08-29 14:05:26.553643 | orchestrator | skipping: Conditional result was False
2025-08-29 14:05:26.570167 |
2025-08-29 14:05:26.570357 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-08-29 14:05:27.165468 | orchestrator | changed
2025-08-29 14:05:27.172057 |
2025-08-29 14:05:27.172167 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-08-29 14:05:27.440735 | orchestrator | ok
2025-08-29 14:05:27.450564 |
2025-08-29 14:05:27.450727 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-08-29 14:05:27.844458 | orchestrator | ok
2025-08-29 14:05:27.851203 |
2025-08-29 14:05:27.851431 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-08-29 14:05:28.247277 | orchestrator | ok
2025-08-29 14:05:28.253636 |
2025-08-29 14:05:28.253752 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-08-29 14:05:28.278046 | orchestrator | skipping: Conditional result was False
2025-08-29 14:05:28.285017 |
2025-08-29 14:05:28.285126 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-08-29 14:05:28.807989 | orchestrator -> localhost | changed
2025-08-29 14:05:28.823968 |
2025-08-29 14:05:28.824089 | TASK [add-build-sshkey : Add back temp key]
2025-08-29 14:05:29.233330 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/adc95f4e315d487b829a740e42876478/work/adc95f4e315d487b829a740e42876478_id_rsa (zuul-build-sshkey)
2025-08-29 14:05:29.233599 | orchestrator -> localhost | ok: Runtime: 0:00:00.021120
2025-08-29 14:05:29.241504 |
2025-08-29 14:05:29.241680 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-08-29 14:05:29.659498 | orchestrator | ok
2025-08-29 14:05:29.667648 |
2025-08-29 14:05:29.667777 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-08-29 14:05:29.701987 | orchestrator | skipping: Conditional result was False
2025-08-29 14:05:29.761649 |
2025-08-29 14:05:29.761787 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-08-29 14:05:30.145918 | orchestrator | ok
2025-08-29 14:05:30.161260 |
2025-08-29 14:05:30.161403 | TASK [validate-host : Define zuul_info_dir fact]
2025-08-29 14:05:30.203140 | orchestrator | ok
2025-08-29 14:05:30.212010 |
2025-08-29 14:05:30.212126 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-08-29 14:05:30.547869 | orchestrator -> localhost | ok
2025-08-29 14:05:30.560899 |
2025-08-29 14:05:30.561081 | TASK [validate-host : Collect information about the host]
2025-08-29 14:05:31.773666 | orchestrator | ok
2025-08-29 14:05:31.791850 |
2025-08-29 14:05:31.791980 | TASK [validate-host : Sanitize hostname]
2025-08-29 14:05:31.865637 | orchestrator | ok
2025-08-29 14:05:31.877472 |
2025-08-29 14:05:31.877632 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-08-29 14:05:32.448104 | orchestrator -> localhost | changed
2025-08-29 14:05:32.461327 |
2025-08-29 14:05:32.461540 | TASK [validate-host : Collect information about zuul worker]
2025-08-29 14:05:32.884232 | orchestrator | ok
2025-08-29 14:05:32.891969 |
2025-08-29 14:05:32.892103 | TASK [validate-host : Write out all zuul information for each host]
2025-08-29 14:05:33.524447 | orchestrator -> localhost | changed
2025-08-29 14:05:33.535813 |
2025-08-29 14:05:33.535931 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-08-29 14:05:33.815195 | orchestrator | ok
2025-08-29 14:05:33.825749 |
2025-08-29 14:05:33.825876 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-08-29 14:06:09.721160 | orchestrator | changed:
2025-08-29 14:06:09.721399 | orchestrator | .d..t...... src/
2025-08-29 14:06:09.721458 | orchestrator | .d..t...... src/github.com/
2025-08-29 14:06:09.721484 | orchestrator | .d..t...... src/github.com/osism/
2025-08-29 14:06:09.721506 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-08-29 14:06:09.721526 | orchestrator | RedHat.yml
2025-08-29 14:06:09.744177 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-08-29 14:06:09.744194 | orchestrator | RedHat.yml
2025-08-29 14:06:09.744246 | orchestrator | = 2.2.0"...
2025-08-29 14:06:22.450756 | orchestrator | 14:06:22.450 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-08-29 14:06:22.477825 | orchestrator | 14:06:22.477 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-08-29 14:06:23.027953 | orchestrator | 14:06:23.027 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-08-29 14:06:23.672893 | orchestrator | 14:06:23.672 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-08-29 14:06:23.747272 | orchestrator | 14:06:23.747 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-08-29 14:06:24.207255 | orchestrator | 14:06:24.207 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-08-29 14:06:24.595315 | orchestrator | 14:06:24.595 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-08-29 14:06:25.474918 | orchestrator | 14:06:25.474 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-08-29 14:06:25.474978 | orchestrator | 14:06:25.474 STDOUT terraform: Providers are signed by their developers.
2025-08-29 14:06:25.475034 | orchestrator | 14:06:25.474 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-08-29 14:06:25.475102 | orchestrator | 14:06:25.475 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-08-29 14:06:25.475207 | orchestrator | 14:06:25.475 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-08-29 14:06:25.475419 | orchestrator | 14:06:25.475 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-08-29 14:06:25.475580 | orchestrator | 14:06:25.475 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-08-29 14:06:25.475620 | orchestrator | 14:06:25.475 STDOUT terraform: you run "tofu init" in the future.
2025-08-29 14:06:25.475694 | orchestrator | 14:06:25.475 STDOUT terraform: OpenTofu has been successfully initialized!
2025-08-29 14:06:25.475810 | orchestrator | 14:06:25.475 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-08-29 14:06:25.475907 | orchestrator | 14:06:25.475 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-08-29 14:06:25.475937 | orchestrator | 14:06:25.475 STDOUT terraform: should now work.
2025-08-29 14:06:25.476045 | orchestrator | 14:06:25.475 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-08-29 14:06:25.476153 | orchestrator | 14:06:25.476 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-08-29 14:06:25.476671 | orchestrator | 14:06:25.476 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-08-29 14:06:25.572627 | orchestrator | 14:06:25.572 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-08-29 14:06:25.572719 | orchestrator | 14:06:25.572 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-08-29 14:06:25.785699 | orchestrator | 14:06:25.785 STDOUT terraform: Created and switched to workspace "ci"!
2025-08-29 14:06:25.785767 | orchestrator | 14:06:25.785 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-08-29 14:06:25.785782 | orchestrator | 14:06:25.785 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-08-29 14:06:25.785791 | orchestrator | 14:06:25.785 STDOUT terraform: for this configuration.
2025-08-29 14:06:25.935441 | orchestrator | 14:06:25.935 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-08-29 14:06:25.935504 | orchestrator | 14:06:25.935 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-08-29 14:06:26.033015 | orchestrator | 14:06:26.032 STDOUT terraform: ci.auto.tfvars
2025-08-29 14:06:26.417416 | orchestrator | 14:06:26.417 STDOUT terraform: default_custom.tf
2025-08-29 14:06:26.556879 | orchestrator | 14:06:26.556 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
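The repeated Terragrunt WARN lines above spell out their own fix: replace the deprecated `TERRAGRUNT_TFPATH` variable with `TG_TF_PATH`, and invoke deprecated subcommands through `terragrunt run --`. A minimal sketch of that migration, using only the exact variable, path, and commands quoted in the warnings (whether the job's wrapper scripts can simply adopt this is an assumption):

```shell
# Non-deprecated equivalent suggested by the Terragrunt warnings in this log.
# Path is taken verbatim from the warning text.
export TG_TF_PATH=/home/zuul-testbed02/terraform

# Deprecated top-level subcommands become `terragrunt run -- <cmd>`:
#   terragrunt workspace ...  ->  terragrunt run -- workspace ...
#   terragrunt fmt            ->  terragrunt run -- fmt
echo "TG_TF_PATH=${TG_TF_PATH}"
```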
2025-08-29 14:06:27.516252 | orchestrator | 14:06:27.516 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-08-29 14:06:28.098946 | orchestrator | 14:06:28.098 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-08-29 14:06:28.323698 | orchestrator | 14:06:28.318 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-08-29 14:06:28.323884 | orchestrator | 14:06:28.319 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-08-29 14:06:28.323893 | orchestrator | 14:06:28.319 STDOUT terraform:   + create
2025-08-29 14:06:28.323900 | orchestrator | 14:06:28.319 STDOUT terraform:  <= read (data resources)
2025-08-29 14:06:28.323907 | orchestrator | 14:06:28.319 STDOUT terraform: OpenTofu will perform the following actions:
2025-08-29 14:06:28.323912 | orchestrator | 14:06:28.319 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-08-29 14:06:28.323917 | orchestrator | 14:06:28.319 STDOUT terraform:   # (config refers to values not yet known)
2025-08-29 14:06:28.323923 | orchestrator | 14:06:28.319 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-08-29 14:06:28.323927 | orchestrator | 14:06:28.319 STDOUT terraform:   + checksum = (known after apply)
2025-08-29 14:06:28.323932 | orchestrator | 14:06:28.319 STDOUT terraform:   + created_at = (known after apply)
2025-08-29 14:06:28.323937 | orchestrator | 14:06:28.319 STDOUT terraform:   + file = (known after apply)
2025-08-29 14:06:28.323942 | orchestrator | 14:06:28.319 STDOUT terraform:   + id = (known after apply)
2025-08-29 14:06:28.323947 | orchestrator | 14:06:28.319 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 14:06:28.323965 | orchestrator | 14:06:28.319 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-08-29 14:06:28.323970 | orchestrator | 14:06:28.319 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-08-29 14:06:28.323975 | orchestrator | 14:06:28.319 STDOUT terraform:   + most_recent = true
2025-08-29 14:06:28.323980 | orchestrator | 14:06:28.319 STDOUT terraform:   + name = (known after apply)
2025-08-29 14:06:28.323985 | orchestrator | 14:06:28.319 STDOUT terraform:   + protected = (known after apply)
2025-08-29 14:06:28.323990 | orchestrator | 14:06:28.319 STDOUT terraform:   + region = (known after apply)
2025-08-29 14:06:28.323995 | orchestrator | 14:06:28.319 STDOUT terraform:   + schema = (known after apply)
2025-08-29 14:06:28.324000 | orchestrator | 14:06:28.319 STDOUT terraform:   + size_bytes = (known after apply)
2025-08-29 14:06:28.324005 | orchestrator | 14:06:28.319 STDOUT terraform:   + tags = (known after apply)
2025-08-29 14:06:28.324009 | orchestrator | 14:06:28.319 STDOUT terraform:   + updated_at = (known after apply)
2025-08-29 14:06:28.324014 | orchestrator | 14:06:28.319 STDOUT terraform:   }
2025-08-29 14:06:28.324023 | orchestrator | 14:06:28.319 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-08-29 14:06:28.324028 | orchestrator | 14:06:28.319 STDOUT terraform:   # (config refers to values not yet known)
2025-08-29 14:06:28.324033 | orchestrator | 14:06:28.319 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-08-29 14:06:28.324037 | orchestrator | 14:06:28.320 STDOUT terraform:   + checksum = (known after apply)
2025-08-29 14:06:28.324042 | orchestrator | 14:06:28.320 STDOUT terraform:   + created_at = (known after apply)
2025-08-29 14:06:28.324047 | orchestrator | 14:06:28.320 STDOUT terraform:   + file = (known after apply)
2025-08-29 14:06:28.324052 | orchestrator | 14:06:28.320 STDOUT terraform:   + id = (known after apply)
2025-08-29 14:06:28.324056 | orchestrator | 14:06:28.320 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 14:06:28.324061 | orchestrator | 14:06:28.320 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-08-29 14:06:28.324066 | orchestrator | 14:06:28.320 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-08-29 14:06:28.324075 | orchestrator | 14:06:28.320 STDOUT terraform:   + most_recent = true
2025-08-29 14:06:28.324081 | orchestrator | 14:06:28.320 STDOUT terraform:   + name = (known after apply)
2025-08-29 14:06:28.324085 | orchestrator | 14:06:28.320 STDOUT terraform:   + protected = (known after apply)
2025-08-29 14:06:28.324090 | orchestrator | 14:06:28.320 STDOUT terraform:   + region = (known after apply)
2025-08-29 14:06:28.324108 | orchestrator | 14:06:28.320 STDOUT terraform:   + schema = (known after apply)
2025-08-29 14:06:28.324113 | orchestrator | 14:06:28.320 STDOUT terraform:   + size_bytes = (known after apply)
2025-08-29 14:06:28.324118 | orchestrator | 14:06:28.320 STDOUT terraform:   + tags = (known after apply)
2025-08-29 14:06:28.324123 | orchestrator | 14:06:28.320 STDOUT terraform:   + updated_at = (known after apply)
2025-08-29 14:06:28.324128 | orchestrator | 14:06:28.320 STDOUT terraform:   }
2025-08-29 14:06:28.324133 | orchestrator | 14:06:28.320 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-08-29 14:06:28.324141 | orchestrator | 14:06:28.320 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-08-29 14:06:28.324146 | orchestrator | 14:06:28.320 STDOUT terraform:   + content = (known after apply)
2025-08-29 14:06:28.324151 | orchestrator | 14:06:28.320 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-08-29 14:06:28.324156 | orchestrator | 14:06:28.320 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-08-29 14:06:28.324161 | orchestrator | 14:06:28.320 STDOUT terraform:   + content_md5 = (known after apply)
2025-08-29 14:06:28.324166 | orchestrator | 14:06:28.320 STDOUT terraform:   + content_sha1 = (known after apply)
2025-08-29 14:06:28.324171 | orchestrator | 14:06:28.320 STDOUT terraform:   + content_sha256 = (known after apply)
2025-08-29 14:06:28.324176 | orchestrator | 14:06:28.320 STDOUT terraform:   + content_sha512 = (known after apply)
2025-08-29 14:06:28.324181 | orchestrator | 14:06:28.320 STDOUT terraform:   + directory_permission = "0777"
2025-08-29 14:06:28.324185 | orchestrator | 14:06:28.321 STDOUT terraform:   + file_permission = "0644"
2025-08-29 14:06:28.324190 | orchestrator | 14:06:28.321 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-08-29 14:06:28.324195 | orchestrator | 14:06:28.321 STDOUT terraform:   + id = (known after apply)
2025-08-29 14:06:28.324200 | orchestrator | 14:06:28.321 STDOUT terraform:   }
2025-08-29 14:06:28.324205 | orchestrator | 14:06:28.321 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-08-29 14:06:28.324209 | orchestrator | 14:06:28.321 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-08-29 14:06:28.324214 | orchestrator | 14:06:28.321 STDOUT terraform:   + content = (known after apply)
2025-08-29 14:06:28.324219 | orchestrator | 14:06:28.321 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-08-29 14:06:28.324224 | orchestrator | 14:06:28.321 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-08-29 14:06:28.324228 | orchestrator | 14:06:28.321 STDOUT terraform:   + content_md5 = (known after apply)
2025-08-29 14:06:28.324233 | orchestrator | 14:06:28.321 STDOUT terraform:   + content_sha1 = (known after apply)
2025-08-29 14:06:28.324238 | orchestrator | 14:06:28.321 STDOUT terraform:   + content_sha256 = (known after apply)
2025-08-29 14:06:28.324243 | orchestrator | 14:06:28.321 STDOUT terraform:   + content_sha512 = (known after apply)
2025-08-29 14:06:28.324248 | orchestrator | 14:06:28.321 STDOUT terraform:   + directory_permission = "0777"
2025-08-29 14:06:28.324252 | orchestrator | 14:06:28.321 STDOUT terraform:   + file_permission = "0644"
2025-08-29 14:06:28.324257 | orchestrator | 14:06:28.321 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-08-29 14:06:28.324262 | orchestrator | 14:06:28.321 STDOUT terraform:   + id = (known after apply)
2025-08-29 14:06:28.324267 | orchestrator | 14:06:28.321 STDOUT terraform:   }
2025-08-29 14:06:28.324274 | orchestrator | 14:06:28.321 STDOUT terraform:   # local_file.inventory will be created
2025-08-29 14:06:28.324279 | orchestrator | 14:06:28.321 STDOUT terraform:   + resource "local_file" "inventory" {
2025-08-29 14:06:28.324284 | orchestrator | 14:06:28.321 STDOUT terraform:   + content = (known after apply)
2025-08-29 14:06:28.324292 | orchestrator | 14:06:28.321 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-08-29 14:06:28.324297 | orchestrator | 14:06:28.321 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-08-29 14:06:28.324305 | orchestrator | 14:06:28.321 STDOUT terraform:   + content_md5 = (known after apply)
2025-08-29 14:06:28.324310 | orchestrator | 14:06:28.321 STDOUT terraform:   + content_sha1 = (known after apply)
2025-08-29 14:06:28.324315 | orchestrator | 14:06:28.321 STDOUT terraform:   + content_sha256 = (known after apply)
2025-08-29 14:06:28.324320 | orchestrator | 14:06:28.321 STDOUT terraform:   + content_sha512 = (known after apply)
2025-08-29 14:06:28.324324 | orchestrator | 14:06:28.322 STDOUT terraform:   + directory_permission = "0777"
2025-08-29 14:06:28.324329 | orchestrator | 14:06:28.322 STDOUT terraform:   + file_permission = "0644"
2025-08-29 14:06:28.324334 | orchestrator | 14:06:28.322 STDOUT terraform:   + filename = "inventory.ci"
2025-08-29 14:06:28.324339 | orchestrator | 14:06:28.322 STDOUT terraform:   + id = (known after apply)
2025-08-29 14:06:28.324344 | orchestrator | 14:06:28.322 STDOUT terraform:   }
2025-08-29 14:06:28.324349 | orchestrator | 14:06:28.322 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-08-29 14:06:28.324354 | orchestrator | 14:06:28.322 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-08-29 14:06:28.324359 | orchestrator | 14:06:28.322 STDOUT terraform:   + content = (sensitive value)
2025-08-29 14:06:28.324364 | orchestrator | 14:06:28.322 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-08-29 14:06:28.324368 | orchestrator | 14:06:28.322 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-08-29 14:06:28.324373 | orchestrator | 14:06:28.322 STDOUT terraform:   + content_md5 = (known after apply)
2025-08-29 14:06:28.324378 | orchestrator | 14:06:28.322 STDOUT terraform:   + content_sha1 = (known after apply)
2025-08-29 14:06:28.324383 | orchestrator | 14:06:28.322 STDOUT terraform:   + content_sha256 = (known after apply)
2025-08-29 14:06:28.324388 | orchestrator | 14:06:28.322 STDOUT terraform:   + content_sha512 = (known after apply)
2025-08-29 14:06:28.324392 | orchestrator | 14:06:28.322 STDOUT terraform:   + directory_permission = "0700"
2025-08-29 14:06:28.324397 | orchestrator | 14:06:28.322 STDOUT terraform:   + file_permission = "0600"
2025-08-29 14:06:28.324402 | orchestrator | 14:06:28.322 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-08-29 14:06:28.324407 | orchestrator | 14:06:28.322 STDOUT terraform:   + id = (known after apply)
2025-08-29 14:06:28.324412 | orchestrator | 14:06:28.322 STDOUT terraform:   }
2025-08-29 14:06:28.324416 | orchestrator | 14:06:28.322 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-08-29 14:06:28.324421 | orchestrator | 14:06:28.322 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-08-29 14:06:28.324426 | orchestrator | 14:06:28.322 STDOUT terraform:   + id = (known after apply)
2025-08-29 14:06:28.324431 | orchestrator | 14:06:28.322 STDOUT terraform:   }
2025-08-29 14:06:28.324436 | orchestrator | 14:06:28.322 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-08-29 14:06:28.324447 | orchestrator | 14:06:28.322 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-08-29 14:06:28.324452 | orchestrator | 14:06:28.322 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 14:06:28.324457 | orchestrator | 14:06:28.322 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 14:06:28.324462 | orchestrator | 14:06:28.322 STDOUT terraform:   + id = (known after apply)
2025-08-29 14:06:28.324466 | orchestrator | 14:06:28.322 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 14:06:28.324471 | orchestrator | 14:06:28.323 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 14:06:28.324476 | orchestrator | 14:06:28.323 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-08-29 14:06:28.324481 | orchestrator | 14:06:28.323 STDOUT terraform:   + region = (known after apply)
2025-08-29 14:06:28.324485 | orchestrator | 14:06:28.323 STDOUT terraform:   + size = 80
2025-08-29 14:06:28.324493 | orchestrator | 14:06:28.323 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 14:06:28.324498 | orchestrator | 14:06:28.323 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 14:06:28.324502 | orchestrator | 14:06:28.323 STDOUT terraform:   }
2025-08-29 14:06:28.324507 | orchestrator | 14:06:28.323 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-08-29 14:06:28.324512 | orchestrator | 14:06:28.323 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 14:06:28.324555 | orchestrator | 14:06:28.323 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 14:06:28.324560 | orchestrator | 14:06:28.323 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 14:06:28.324831 | orchestrator | 14:06:28.323 STDOUT terraform:   + id = (known after apply)
2025-08-29 14:06:28.325429 | orchestrator | 14:06:28.324 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 14:06:28.325982 | orchestrator | 14:06:28.325 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 14:06:28.326569 | orchestrator | 14:06:28.326 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-08-29 14:06:28.326870 | orchestrator | 14:06:28.326 STDOUT terraform:   + region = (known after apply)
2025-08-29 14:06:28.327159 | orchestrator | 14:06:28.326 STDOUT terraform:   + size = 80
2025-08-29 14:06:28.327556 | orchestrator | 14:06:28.327 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 14:06:28.327744 | orchestrator | 14:06:28.327 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 14:06:28.327836 | orchestrator | 14:06:28.327 STDOUT terraform:   }
2025-08-29 14:06:28.328132 | orchestrator | 14:06:28.327 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-08-29 14:06:28.328695 | orchestrator | 14:06:28.328 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 14:06:28.329050 | orchestrator | 14:06:28.328 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 14:06:28.329555 | orchestrator | 14:06:28.329 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 14:06:28.329610 | orchestrator | 14:06:28.329 STDOUT terraform:   + id = (known after apply)
2025-08-29 14:06:28.329759 | orchestrator | 14:06:28.329 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 14:06:28.329820 | orchestrator | 14:06:28.329 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 14:06:28.330154 | orchestrator | 14:06:28.329 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-08-29 14:06:28.330223 | orchestrator | 14:06:28.330 STDOUT terraform:   + region = (known after apply)
2025-08-29 14:06:28.330350 | orchestrator | 14:06:28.330 STDOUT terraform:   + size = 80
2025-08-29 14:06:28.330475 | orchestrator | 14:06:28.330 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 14:06:28.330599 | orchestrator | 14:06:28.330 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 14:06:28.330676 | orchestrator | 14:06:28.330 STDOUT terraform:   }
2025-08-29 14:06:28.330814 | orchestrator | 14:06:28.330 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-08-29 14:06:28.331037 | orchestrator | 14:06:28.330 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 14:06:28.331131 | orchestrator | 14:06:28.331 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 14:06:28.331274 | orchestrator | 14:06:28.331 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 14:06:28.331372 | orchestrator | 14:06:28.331 STDOUT terraform:   + id = (known after apply)
2025-08-29 14:06:28.331544 | orchestrator | 14:06:28.331 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 14:06:28.331646 | orchestrator | 14:06:28.331 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 14:06:28.331716 | orchestrator | 14:06:28.331 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-08-29 14:06:28.331800 | orchestrator | 14:06:28.331 STDOUT terraform:   + region = (known after apply)
2025-08-29 14:06:28.331989 | orchestrator | 14:06:28.331 STDOUT terraform:   + size = 80
2025-08-29 14:06:28.332086 | orchestrator | 14:06:28.332 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 14:06:28.332170 | orchestrator | 14:06:28.332 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 14:06:28.332323 | orchestrator | 14:06:28.332 STDOUT terraform:   }
2025-08-29 14:06:28.332422 | orchestrator | 14:06:28.332 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-08-29 14:06:28.332558 | orchestrator | 14:06:28.332 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 14:06:28.332611 | orchestrator | 14:06:28.332 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 14:06:28.332733 | orchestrator | 14:06:28.332 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 14:06:28.332847 | orchestrator | 14:06:28.332 STDOUT terraform:   + id = (known after apply)
2025-08-29 14:06:28.332977 | orchestrator | 14:06:28.332 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 14:06:28.333086 | orchestrator | 14:06:28.333 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 14:06:28.333226 | orchestrator | 14:06:28.333 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-08-29 14:06:28.333312 | orchestrator | 14:06:28.333 STDOUT terraform:   + region = (known after apply)
2025-08-29 14:06:28.333392 | orchestrator | 14:06:28.333 STDOUT terraform:   + size = 80
2025-08-29 14:06:28.333537 | orchestrator | 14:06:28.333 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 14:06:28.333621 | orchestrator | 14:06:28.333 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 14:06:28.333699 | orchestrator | 14:06:28.333 STDOUT terraform:   }
2025-08-29 14:06:28.333837 | orchestrator | 14:06:28.333 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-08-29 14:06:28.333977 | orchestrator | 14:06:28.333 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 14:06:28.334053 | orchestrator | 14:06:28.333 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 14:06:28.334204 | orchestrator | 14:06:28.334 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 14:06:28.334332 | orchestrator | 14:06:28.334 STDOUT terraform:   + id = (known after apply)
2025-08-29 14:06:28.334571 | orchestrator | 14:06:28.334 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 14:06:28.334732 | orchestrator | 14:06:28.334 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 14:06:28.334952 | orchestrator | 14:06:28.334 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-08-29 14:06:28.335111 | orchestrator | 14:06:28.334 STDOUT terraform:   + region = (known after apply)
2025-08-29 14:06:28.335220 | orchestrator | 14:06:28.335 STDOUT terraform:   + size = 80
2025-08-29 14:06:28.335289 | orchestrator | 14:06:28.335 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 14:06:28.335463 | orchestrator | 14:06:28.335 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 14:06:28.335498 | orchestrator | 14:06:28.335 STDOUT terraform:   }
2025-08-29 14:06:28.335669 | orchestrator | 14:06:28.335 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-08-29 14:06:28.335843 | orchestrator | 14:06:28.335 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 14:06:28.335920 | orchestrator | 14:06:28.335 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 14:06:28.336017 | orchestrator | 14:06:28.335 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 14:06:28.336181 | orchestrator | 14:06:28.336 STDOUT terraform:   + id = (known after apply)
2025-08-29 14:06:28.336317 | orchestrator | 14:06:28.336 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 14:06:28.336496 | orchestrator | 14:06:28.336 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 14:06:28.336884 | orchestrator | 14:06:28.336 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-08-29 14:06:28.336986 | orchestrator | 14:06:28.336 STDOUT terraform:   + region = (known after apply)
2025-08-29 14:06:28.337080 | orchestrator | 14:06:28.337 STDOUT terraform:   + size = 80
2025-08-29 14:06:28.337159 | orchestrator | 14:06:28.337 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 14:06:28.337236 | orchestrator | 14:06:28.337 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 14:06:28.337303 | orchestrator | 14:06:28.337 STDOUT terraform:   }
2025-08-29 14:06:28.337554 | orchestrator | 14:06:28.337 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-08-29 14:06:28.337626 | orchestrator | 14:06:28.337 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-08-29 14:06:28.337736 | orchestrator | 14:06:28.337 STDOUT
terraform:  + attachment = (known after apply) 2025-08-29 14:06:28.337853 | orchestrator | 14:06:28.337 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:06:28.337952 | orchestrator | 14:06:28.337 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.338103 | orchestrator | 14:06:28.337 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:06:28.338254 | orchestrator | 14:06:28.338 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-08-29 14:06:28.338442 | orchestrator | 14:06:28.338 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.338576 | orchestrator | 14:06:28.338 STDOUT terraform:  + size = 20 2025-08-29 14:06:28.338618 | orchestrator | 14:06:28.338 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:06:28.338708 | orchestrator | 14:06:28.338 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:06:28.338818 | orchestrator | 14:06:28.338 STDOUT terraform:  } 2025-08-29 14:06:28.339002 | orchestrator | 14:06:28.338 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-08-29 14:06:28.339233 | orchestrator | 14:06:28.339 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:06:28.339405 | orchestrator | 14:06:28.339 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:06:28.339538 | orchestrator | 14:06:28.339 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:06:28.339731 | orchestrator | 14:06:28.339 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.340314 | orchestrator | 14:06:28.340 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:06:28.340685 | orchestrator | 14:06:28.340 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-08-29 14:06:28.340838 | orchestrator | 14:06:28.340 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.342333 | orchestrator | 14:06:28.340 STDOUT terraform:  + size = 20 2025-08-29 14:06:28.342417 | 
orchestrator | 14:06:28.342 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:06:28.342570 | orchestrator | 14:06:28.342 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:06:28.342606 | orchestrator | 14:06:28.342 STDOUT terraform:  } 2025-08-29 14:06:28.342749 | orchestrator | 14:06:28.342 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-08-29 14:06:28.342820 | orchestrator | 14:06:28.342 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:06:28.342970 | orchestrator | 14:06:28.342 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:06:28.343033 | orchestrator | 14:06:28.343 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:06:28.343125 | orchestrator | 14:06:28.343 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.343276 | orchestrator | 14:06:28.343 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:06:28.343342 | orchestrator | 14:06:28.343 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-08-29 14:06:28.343430 | orchestrator | 14:06:28.343 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.343572 | orchestrator | 14:06:28.343 STDOUT terraform:  + size = 20 2025-08-29 14:06:28.343621 | orchestrator | 14:06:28.343 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:06:28.343662 | orchestrator | 14:06:28.343 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:06:28.343728 | orchestrator | 14:06:28.343 STDOUT terraform:  } 2025-08-29 14:06:28.343873 | orchestrator | 14:06:28.343 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-08-29 14:06:28.343938 | orchestrator | 14:06:28.343 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:06:28.344067 | orchestrator | 14:06:28.344 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:06:28.344213 | orchestrator | 
14:06:28.344 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:06:28.344260 | orchestrator | 14:06:28.344 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.344304 | orchestrator | 14:06:28.344 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:06:28.344590 | orchestrator | 14:06:28.344 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-08-29 14:06:28.344642 | orchestrator | 14:06:28.344 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.344673 | orchestrator | 14:06:28.344 STDOUT terraform:  + size = 20 2025-08-29 14:06:28.344708 | orchestrator | 14:06:28.344 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:06:28.344742 | orchestrator | 14:06:28.344 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:06:28.344764 | orchestrator | 14:06:28.344 STDOUT terraform:  } 2025-08-29 14:06:28.344819 | orchestrator | 14:06:28.344 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-08-29 14:06:28.344873 | orchestrator | 14:06:28.344 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:06:28.344972 | orchestrator | 14:06:28.344 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:06:28.345283 | orchestrator | 14:06:28.345 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:06:28.345393 | orchestrator | 14:06:28.345 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.345500 | orchestrator | 14:06:28.345 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:06:28.345692 | orchestrator | 14:06:28.345 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-08-29 14:06:28.345841 | orchestrator | 14:06:28.345 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.346104 | orchestrator | 14:06:28.345 STDOUT terraform:  + size = 20 2025-08-29 14:06:28.346229 | orchestrator | 14:06:28.346 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 
14:06:28.346593 | orchestrator | 14:06:28.346 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:06:28.346810 | orchestrator | 14:06:28.346 STDOUT terraform:  } 2025-08-29 14:06:28.347143 | orchestrator | 14:06:28.346 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-08-29 14:06:28.347319 | orchestrator | 14:06:28.347 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:06:28.347365 | orchestrator | 14:06:28.347 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:06:28.347398 | orchestrator | 14:06:28.347 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:06:28.347443 | orchestrator | 14:06:28.347 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.347493 | orchestrator | 14:06:28.347 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:06:28.347574 | orchestrator | 14:06:28.347 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-08-29 14:06:28.347620 | orchestrator | 14:06:28.347 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.347657 | orchestrator | 14:06:28.347 STDOUT terraform:  + size = 20 2025-08-29 14:06:28.347691 | orchestrator | 14:06:28.347 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:06:28.347723 | orchestrator | 14:06:28.347 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:06:28.347744 | orchestrator | 14:06:28.347 STDOUT terraform:  } 2025-08-29 14:06:28.347795 | orchestrator | 14:06:28.347 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-08-29 14:06:28.347848 | orchestrator | 14:06:28.347 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:06:28.347891 | orchestrator | 14:06:28.347 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:06:28.347923 | orchestrator | 14:06:28.347 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:06:28.347967 | 
orchestrator | 14:06:28.347 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.348010 | orchestrator | 14:06:28.347 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:06:28.348058 | orchestrator | 14:06:28.348 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-08-29 14:06:28.348102 | orchestrator | 14:06:28.348 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.348130 | orchestrator | 14:06:28.348 STDOUT terraform:  + size = 20 2025-08-29 14:06:28.348161 | orchestrator | 14:06:28.348 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:06:28.348193 | orchestrator | 14:06:28.348 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:06:28.348214 | orchestrator | 14:06:28.348 STDOUT terraform:  } 2025-08-29 14:06:28.348265 | orchestrator | 14:06:28.348 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-08-29 14:06:28.348333 | orchestrator | 14:06:28.348 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:06:28.348592 | orchestrator | 14:06:28.348 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:06:28.348717 | orchestrator | 14:06:28.348 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:06:28.348857 | orchestrator | 14:06:28.348 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.348974 | orchestrator | 14:06:28.348 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:06:28.349085 | orchestrator | 14:06:28.348 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-08-29 14:06:28.349120 | orchestrator | 14:06:28.349 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.349173 | orchestrator | 14:06:28.349 STDOUT terraform:  + size = 20 2025-08-29 14:06:28.349212 | orchestrator | 14:06:28.349 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:06:28.349286 | orchestrator | 14:06:28.349 STDOUT terraform:  + volume_type = "ssd" 
2025-08-29 14:06:28.349328 | orchestrator | 14:06:28.349 STDOUT terraform:  } 2025-08-29 14:06:28.349431 | orchestrator | 14:06:28.349 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-08-29 14:06:28.349628 | orchestrator | 14:06:28.349 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:06:28.349726 | orchestrator | 14:06:28.349 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:06:28.349750 | orchestrator | 14:06:28.349 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:06:28.349836 | orchestrator | 14:06:28.349 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.349888 | orchestrator | 14:06:28.349 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:06:28.349960 | orchestrator | 14:06:28.349 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-08-29 14:06:28.350059 | orchestrator | 14:06:28.349 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.350092 | orchestrator | 14:06:28.350 STDOUT terraform:  + size = 20 2025-08-29 14:06:28.350180 | orchestrator | 14:06:28.350 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:06:28.350307 | orchestrator | 14:06:28.350 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:06:28.350337 | orchestrator | 14:06:28.350 STDOUT terraform:  } 2025-08-29 14:06:28.350506 | orchestrator | 14:06:28.350 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-08-29 14:06:28.350536 | orchestrator | 14:06:28.350 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-08-29 14:06:28.350574 | orchestrator | 14:06:28.350 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 14:06:28.350639 | orchestrator | 14:06:28.350 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 14:06:28.350723 | orchestrator | 14:06:28.350 STDOUT terraform:  + all_metadata = (known after apply) 
2025-08-29 14:06:28.350816 | orchestrator | 14:06:28.350 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:28.350855 | orchestrator | 14:06:28.350 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:06:28.350914 | orchestrator | 14:06:28.350 STDOUT terraform:  + config_drive = true 2025-08-29 14:06:28.350959 | orchestrator | 14:06:28.350 STDOUT terraform:  + created = (known after apply) 2025-08-29 14:06:28.351050 | orchestrator | 14:06:28.350 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 14:06:28.351185 | orchestrator | 14:06:28.351 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-08-29 14:06:28.351247 | orchestrator | 14:06:28.351 STDOUT terraform:  + force_delete = false 2025-08-29 14:06:28.351354 | orchestrator | 14:06:28.351 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 14:06:28.351464 | orchestrator | 14:06:28.351 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.351529 | orchestrator | 14:06:28.351 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 14:06:28.351649 | orchestrator | 14:06:28.351 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 14:06:28.351679 | orchestrator | 14:06:28.351 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 14:06:28.351847 | orchestrator | 14:06:28.351 STDOUT terraform:  + name = "testbed-manager" 2025-08-29 14:06:28.351973 | orchestrator | 14:06:28.351 STDOUT terraform:  + power_state = "active" 2025-08-29 14:06:28.352020 | orchestrator | 14:06:28.351 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.352084 | orchestrator | 14:06:28.352 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 14:06:28.352116 | orchestrator | 14:06:28.352 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 14:06:28.352201 | orchestrator | 14:06:28.352 STDOUT terraform:  + updated = (known after apply) 2025-08-29 14:06:28.352237 | orchestrator | 14:06:28.352 STDOUT terraform:  + 
user_data = (sensitive value) 2025-08-29 14:06:28.352279 | orchestrator | 14:06:28.352 STDOUT terraform:  + block_device { 2025-08-29 14:06:28.352312 | orchestrator | 14:06:28.352 STDOUT terraform:  + boot_index = 0 2025-08-29 14:06:28.352358 | orchestrator | 14:06:28.352 STDOUT terraform:  + delete_on_termination = false 2025-08-29 14:06:28.352419 | orchestrator | 14:06:28.352 STDOUT terraform:  + destination_type = "volume" 2025-08-29 14:06:28.352527 | orchestrator | 14:06:28.352 STDOUT terraform:  + multiattach = false 2025-08-29 14:06:28.352603 | orchestrator | 14:06:28.352 STDOUT terraform:  + source_type = "volume" 2025-08-29 14:06:28.352706 | orchestrator | 14:06:28.352 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:06:28.352770 | orchestrator | 14:06:28.352 STDOUT terraform:  } 2025-08-29 14:06:28.352778 | orchestrator | 14:06:28.352 STDOUT terraform:  + network { 2025-08-29 14:06:28.352797 | orchestrator | 14:06:28.352 STDOUT terraform:  + access_network = false 2025-08-29 14:06:28.352869 | orchestrator | 14:06:28.352 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 14:06:28.352938 | orchestrator | 14:06:28.352 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 14:06:28.353001 | orchestrator | 14:06:28.352 STDOUT terraform:  + mac = (known after apply) 2025-08-29 14:06:28.353168 | orchestrator | 14:06:28.352 STDOUT terraform:  + name = (known after apply) 2025-08-29 14:06:28.353339 | orchestrator | 14:06:28.353 STDOUT terraform:  + port = (known after apply) 2025-08-29 14:06:28.353382 | orchestrator | 14:06:28.353 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:06:28.353437 | orchestrator | 14:06:28.353 STDOUT terraform:  } 2025-08-29 14:06:28.353475 | orchestrator | 14:06:28.353 STDOUT terraform:  } 2025-08-29 14:06:28.353586 | orchestrator | 14:06:28.353 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-08-29 14:06:28.353735 | orchestrator | 14:06:28.353 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 14:06:28.353902 | orchestrator | 14:06:28.353 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 14:06:28.354045 | orchestrator | 14:06:28.353 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 14:06:28.354231 | orchestrator | 14:06:28.354 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 14:06:28.354306 | orchestrator | 14:06:28.354 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:28.354408 | orchestrator | 14:06:28.354 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:06:28.354510 | orchestrator | 14:06:28.354 STDOUT terraform:  + config_drive = true 2025-08-29 14:06:28.354668 | orchestrator | 14:06:28.354 STDOUT terraform:  + created = (known after apply) 2025-08-29 14:06:28.354708 | orchestrator | 14:06:28.354 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 14:06:28.354764 | orchestrator | 14:06:28.354 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 14:06:28.357078 | orchestrator | 14:06:28.354 STDOUT terraform:  + force_delete = false 2025-08-29 14:06:28.357118 | orchestrator | 14:06:28.357 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 14:06:28.357123 | orchestrator | 14:06:28.357 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.357153 | orchestrator | 14:06:28.357 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 14:06:28.357189 | orchestrator | 14:06:28.357 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 14:06:28.357215 | orchestrator | 14:06:28.357 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 14:06:28.357246 | orchestrator | 14:06:28.357 STDOUT terraform:  + name = "testbed-node-0" 2025-08-29 14:06:28.357271 | orchestrator | 14:06:28.357 STDOUT terraform:  + power_state = "active" 2025-08-29 14:06:28.357307 | orchestrator | 14:06:28.357 STDOUT terraform:  + region = (known after 
apply) 2025-08-29 14:06:28.357343 | orchestrator | 14:06:28.357 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 14:06:28.357367 | orchestrator | 14:06:28.357 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 14:06:28.357401 | orchestrator | 14:06:28.357 STDOUT terraform:  + updated = (known after apply) 2025-08-29 14:06:28.357451 | orchestrator | 14:06:28.357 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 14:06:28.357458 | orchestrator | 14:06:28.357 STDOUT terraform:  + block_device { 2025-08-29 14:06:28.357486 | orchestrator | 14:06:28.357 STDOUT terraform:  + boot_index = 0 2025-08-29 14:06:28.357525 | orchestrator | 14:06:28.357 STDOUT terraform:  + delete_on_termination = false 2025-08-29 14:06:28.357553 | orchestrator | 14:06:28.357 STDOUT terraform:  + destination_type = "volume" 2025-08-29 14:06:28.357580 | orchestrator | 14:06:28.357 STDOUT terraform:  + multiattach = false 2025-08-29 14:06:28.357610 | orchestrator | 14:06:28.357 STDOUT terraform:  + source_type = "volume" 2025-08-29 14:06:28.357650 | orchestrator | 14:06:28.357 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:06:28.357656 | orchestrator | 14:06:28.357 STDOUT terraform:  } 2025-08-29 14:06:28.357662 | orchestrator | 14:06:28.357 STDOUT terraform:  + network { 2025-08-29 14:06:28.357688 | orchestrator | 14:06:28.357 STDOUT terraform:  + access_network = false 2025-08-29 14:06:28.357719 | orchestrator | 14:06:28.357 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 14:06:28.357749 | orchestrator | 14:06:28.357 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 14:06:28.357782 | orchestrator | 14:06:28.357 STDOUT terraform:  + mac = (known after apply) 2025-08-29 14:06:28.357813 | orchestrator | 14:06:28.357 STDOUT terraform:  + name = (known after apply) 2025-08-29 14:06:28.357844 | orchestrator | 14:06:28.357 STDOUT terraform:  + port = (known after apply) 2025-08-29 
14:06:28.357877 | orchestrator | 14:06:28.357 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:06:28.357883 | orchestrator | 14:06:28.357 STDOUT terraform:  } 2025-08-29 14:06:28.357898 | orchestrator | 14:06:28.357 STDOUT terraform:  } 2025-08-29 14:06:28.357941 | orchestrator | 14:06:28.357 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-08-29 14:06:28.357982 | orchestrator | 14:06:28.357 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 14:06:28.358029 | orchestrator | 14:06:28.357 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 14:06:28.358137 | orchestrator | 14:06:28.358 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 14:06:28.358144 | orchestrator | 14:06:28.358 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 14:06:28.358148 | orchestrator | 14:06:28.358 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:28.358157 | orchestrator | 14:06:28.358 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:06:28.358162 | orchestrator | 14:06:28.358 STDOUT terraform:  + config_drive = true 2025-08-29 14:06:28.358208 | orchestrator | 14:06:28.358 STDOUT terraform:  + created = (known after apply) 2025-08-29 14:06:28.358245 | orchestrator | 14:06:28.358 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 14:06:28.358296 | orchestrator | 14:06:28.358 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 14:06:28.358304 | orchestrator | 14:06:28.358 STDOUT terraform:  + force_delete = false 2025-08-29 14:06:28.358316 | orchestrator | 14:06:28.358 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 14:06:28.358392 | orchestrator | 14:06:28.358 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.358397 | orchestrator | 14:06:28.358 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 14:06:28.358422 | orchestrator | 14:06:28.358 STDOUT 
terraform:  + image_name = (known after apply) 2025-08-29 14:06:28.358434 | orchestrator | 14:06:28.358 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 14:06:28.358477 | orchestrator | 14:06:28.358 STDOUT terraform:  + name = "testbed-node-1" 2025-08-29 14:06:28.358484 | orchestrator | 14:06:28.358 STDOUT terraform:  + power_state = "active" 2025-08-29 14:06:28.358575 | orchestrator | 14:06:28.358 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.358581 | orchestrator | 14:06:28.358 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 14:06:28.358586 | orchestrator | 14:06:28.358 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 14:06:28.358653 | orchestrator | 14:06:28.358 STDOUT terraform:  + updated = (known after apply) 2025-08-29 14:06:28.358663 | orchestrator | 14:06:28.358 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 14:06:28.358718 | orchestrator | 14:06:28.358 STDOUT terraform:  + block_device { 2025-08-29 14:06:28.358723 | orchestrator | 14:06:28.358 STDOUT terraform:  + boot_index = 0 2025-08-29 14:06:28.358731 | orchestrator | 14:06:28.358 STDOUT terraform:  + delete_on_termination = false 2025-08-29 14:06:28.358780 | orchestrator | 14:06:28.358 STDOUT terraform:  + destination_type = "volume" 2025-08-29 14:06:28.358788 | orchestrator | 14:06:28.358 STDOUT terraform:  + multiattach = false 2025-08-29 14:06:28.358821 | orchestrator | 14:06:28.358 STDOUT terraform:  + source_type = "volume" 2025-08-29 14:06:28.358864 | orchestrator | 14:06:28.358 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:06:28.358869 | orchestrator | 14:06:28.358 STDOUT terraform:  } 2025-08-29 14:06:28.358874 | orchestrator | 14:06:28.358 STDOUT terraform:  + network { 2025-08-29 14:06:28.358900 | orchestrator | 14:06:28.358 STDOUT terraform:  + access_network = false 2025-08-29 14:06:28.358929 | orchestrator | 14:06:28.358 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-08-29 14:06:28.358954 | orchestrator | 14:06:28.358 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 14:06:28.359017 | orchestrator | 14:06:28.358 STDOUT terraform:  + mac = (known after apply) 2025-08-29 14:06:28.359022 | orchestrator | 14:06:28.358 STDOUT terraform:  + name = (known after apply) 2025-08-29 14:06:28.359049 | orchestrator | 14:06:28.359 STDOUT terraform:  + port = (known after apply) 2025-08-29 14:06:28.359106 | orchestrator | 14:06:28.359 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:06:28.359111 | orchestrator | 14:06:28.359 STDOUT terraform:  } 2025-08-29 14:06:28.359115 | orchestrator | 14:06:28.359 STDOUT terraform:  } 2025-08-29 14:06:28.359164 | orchestrator | 14:06:28.359 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-08-29 14:06:28.359171 | orchestrator | 14:06:28.359 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 14:06:28.359223 | orchestrator | 14:06:28.359 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 14:06:28.359233 | orchestrator | 14:06:28.359 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 14:06:28.359293 | orchestrator | 14:06:28.359 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 14:06:28.359300 | orchestrator | 14:06:28.359 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:28.359333 | orchestrator | 14:06:28.359 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:06:28.359340 | orchestrator | 14:06:28.359 STDOUT terraform:  + config_drive = true 2025-08-29 14:06:28.359381 | orchestrator | 14:06:28.359 STDOUT terraform:  + created = (known after apply) 2025-08-29 14:06:28.359415 | orchestrator | 14:06:28.359 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 14:06:28.359445 | orchestrator | 14:06:28.359 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 14:06:28.359468 | orchestrator | 14:06:28.359 
2025-08-29 14:06:28 | orchestrator | STDOUT terraform:

      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain            = (known after apply)
      + external              = (known after apply)
      + id                    = (known after apply)
      + mtu                   = (known after apply)
      + name                  = "net-testbed-management"
      + port_security_enabled = (known after apply)
      + qos_policy_id         = (known after apply)
      + region                = (known after apply)
      + shared                = (known after apply)
      + tenant_id             = (known after apply)
      + transparent_vlan      = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 14:06:28.374381 | orchestrator | 14:06:28.371 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 14:06:28.374385 | orchestrator | 14:06:28.371 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:28.374388 | orchestrator | 14:06:28.371 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 14:06:28.374392 | orchestrator | 14:06:28.371 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 14:06:28.374396 | orchestrator | 14:06:28.371 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 14:06:28.374400 | orchestrator | 14:06:28.371 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 14:06:28.374403 | orchestrator | 14:06:28.371 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.374407 | orchestrator | 14:06:28.371 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 14:06:28.374411 | orchestrator | 14:06:28.371 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 14:06:28.374415 | orchestrator | 14:06:28.371 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 14:06:28.374418 | orchestrator | 14:06:28.371 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 14:06:28.374422 | orchestrator | 14:06:28.371 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.374426 | orchestrator | 14:06:28.371 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 14:06:28.374429 | orchestrator | 14:06:28.371 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:28.374433 | orchestrator | 14:06:28.371 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:28.374437 | orchestrator | 14:06:28.371 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 14:06:28.374441 | orchestrator | 14:06:28.371 STDOUT terraform:  } 2025-08-29 14:06:28.374445 | orchestrator | 14:06:28.371 STDOUT terraform:  
+ allowed_address_pairs { 2025-08-29 14:06:28.374449 | orchestrator | 14:06:28.371 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 14:06:28.374452 | orchestrator | 14:06:28.371 STDOUT terraform:  } 2025-08-29 14:06:28.374456 | orchestrator | 14:06:28.371 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:28.374460 | orchestrator | 14:06:28.371 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 14:06:28.374466 | orchestrator | 14:06:28.371 STDOUT terraform:  } 2025-08-29 14:06:28.374470 | orchestrator | 14:06:28.371 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:28.374474 | orchestrator | 14:06:28.371 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 14:06:28.374478 | orchestrator | 14:06:28.371 STDOUT terraform:  } 2025-08-29 14:06:28.374481 | orchestrator | 14:06:28.371 STDOUT terraform:  + binding (known after apply) 2025-08-29 14:06:28.374485 | orchestrator | 14:06:28.371 STDOUT terraform:  + fixed_ip { 2025-08-29 14:06:28.374489 | orchestrator | 14:06:28.371 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-08-29 14:06:28.374493 | orchestrator | 14:06:28.371 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 14:06:28.374497 | orchestrator | 14:06:28.371 STDOUT terraform:  } 2025-08-29 14:06:28.374501 | orchestrator | 14:06:28.371 STDOUT terraform:  } 2025-08-29 14:06:28.374505 | orchestrator | 14:06:28.371 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-08-29 14:06:28.374508 | orchestrator | 14:06:28.371 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 14:06:28.374512 | orchestrator | 14:06:28.372 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 14:06:28.374535 | orchestrator | 14:06:28.372 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 14:06:28.374539 | orchestrator | 14:06:28.372 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-08-29 14:06:28.374546 | orchestrator | 14:06:28.372 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:28.374550 | orchestrator | 14:06:28.372 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 14:06:28.374554 | orchestrator | 14:06:28.372 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 14:06:28.374560 | orchestrator | 14:06:28.372 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 14:06:28.374564 | orchestrator | 14:06:28.372 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 14:06:28.374570 | orchestrator | 14:06:28.372 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.374574 | orchestrator | 14:06:28.372 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 14:06:28.374578 | orchestrator | 14:06:28.372 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 14:06:28.374581 | orchestrator | 14:06:28.372 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 14:06:28.374585 | orchestrator | 14:06:28.372 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 14:06:28.374589 | orchestrator | 14:06:28.372 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.374593 | orchestrator | 14:06:28.372 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 14:06:28.374596 | orchestrator | 14:06:28.372 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:28.374600 | orchestrator | 14:06:28.372 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:28.374604 | orchestrator | 14:06:28.372 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 14:06:28.374610 | orchestrator | 14:06:28.372 STDOUT terraform:  } 2025-08-29 14:06:28.374614 | orchestrator | 14:06:28.372 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:28.374618 | orchestrator | 14:06:28.372 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 14:06:28.374622 | 
orchestrator | 14:06:28.372 STDOUT terraform:  } 2025-08-29 14:06:28.374625 | orchestrator | 14:06:28.372 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:28.374629 | orchestrator | 14:06:28.372 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 14:06:28.374633 | orchestrator | 14:06:28.372 STDOUT terraform:  } 2025-08-29 14:06:28.374637 | orchestrator | 14:06:28.372 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:28.374640 | orchestrator | 14:06:28.372 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 14:06:28.374644 | orchestrator | 14:06:28.372 STDOUT terraform:  } 2025-08-29 14:06:28.374648 | orchestrator | 14:06:28.372 STDOUT terraform:  + binding (known after apply) 2025-08-29 14:06:28.374652 | orchestrator | 14:06:28.372 STDOUT terraform:  + fixed_ip { 2025-08-29 14:06:28.374655 | orchestrator | 14:06:28.372 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-08-29 14:06:28.374659 | orchestrator | 14:06:28.372 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 14:06:28.374663 | orchestrator | 14:06:28.372 STDOUT terraform:  } 2025-08-29 14:06:28.374667 | orchestrator | 14:06:28.372 STDOUT terraform:  } 2025-08-29 14:06:28.374671 | orchestrator | 14:06:28.372 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-08-29 14:06:28.374675 | orchestrator | 14:06:28.372 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 14:06:28.374679 | orchestrator | 14:06:28.372 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 14:06:28.374682 | orchestrator | 14:06:28.372 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 14:06:28.374686 | orchestrator | 14:06:28.372 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 14:06:28.374690 | orchestrator | 14:06:28.372 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:28.374694 | orchestrator | 
14:06:28.372 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 14:06:28.374700 | orchestrator | 14:06:28.373 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 14:06:28.374704 | orchestrator | 14:06:28.373 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 14:06:28.374708 | orchestrator | 14:06:28.373 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 14:06:28.374711 | orchestrator | 14:06:28.373 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.374715 | orchestrator | 14:06:28.373 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 14:06:28.374719 | orchestrator | 14:06:28.373 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 14:06:28.374725 | orchestrator | 14:06:28.373 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 14:06:28.374732 | orchestrator | 14:06:28.373 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 14:06:28.374735 | orchestrator | 14:06:28.373 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.374739 | orchestrator | 14:06:28.373 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 14:06:28.374743 | orchestrator | 14:06:28.373 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:28.374747 | orchestrator | 14:06:28.373 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:28.374750 | orchestrator | 14:06:28.373 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 14:06:28.374754 | orchestrator | 14:06:28.373 STDOUT terraform:  } 2025-08-29 14:06:28.374758 | orchestrator | 14:06:28.373 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:28.374762 | orchestrator | 14:06:28.373 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 14:06:28.374765 | orchestrator | 14:06:28.373 STDOUT terraform:  } 2025-08-29 14:06:28.374769 | orchestrator | 14:06:28.373 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 
14:06:28.374773 | orchestrator | 14:06:28.373 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 14:06:28.374777 | orchestrator | 14:06:28.373 STDOUT terraform:  } 2025-08-29 14:06:28.374781 | orchestrator | 14:06:28.373 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:28.374785 | orchestrator | 14:06:28.373 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 14:06:28.374788 | orchestrator | 14:06:28.373 STDOUT terraform:  } 2025-08-29 14:06:28.374792 | orchestrator | 14:06:28.373 STDOUT terraform:  + binding (known after apply) 2025-08-29 14:06:28.374796 | orchestrator | 14:06:28.373 STDOUT terraform:  + fixed_ip { 2025-08-29 14:06:28.374800 | orchestrator | 14:06:28.373 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-08-29 14:06:28.374804 | orchestrator | 14:06:28.373 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 14:06:28.374808 | orchestrator | 14:06:28.373 STDOUT terraform:  } 2025-08-29 14:06:28.374811 | orchestrator | 14:06:28.373 STDOUT terraform:  } 2025-08-29 14:06:28.374815 | orchestrator | 14:06:28.373 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-08-29 14:06:28.374819 | orchestrator | 14:06:28.373 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 14:06:28.374823 | orchestrator | 14:06:28.373 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 14:06:28.374827 | orchestrator | 14:06:28.373 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 14:06:28.374831 | orchestrator | 14:06:28.373 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 14:06:28.374835 | orchestrator | 14:06:28.373 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:28.374838 | orchestrator | 14:06:28.373 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 14:06:28.374842 | orchestrator | 14:06:28.374 STDOUT terraform:  + device_owner = (known after 
apply) 2025-08-29 14:06:28.374846 | orchestrator | 14:06:28.374 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 14:06:28.374852 | orchestrator | 14:06:28.374 STDOUT terraform:  + dns_n 2025-08-29 14:06:28.374859 | orchestrator | 14:06:28.374 STDOUT terraform: ame = (known after apply) 2025-08-29 14:06:28.374863 | orchestrator | 14:06:28.374 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.374867 | orchestrator | 14:06:28.374 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 14:06:28.374870 | orchestrator | 14:06:28.374 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 14:06:28.374874 | orchestrator | 14:06:28.374 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 14:06:28.374878 | orchestrator | 14:06:28.374 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 14:06:28.374882 | orchestrator | 14:06:28.374 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.374886 | orchestrator | 14:06:28.374 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 14:06:28.374889 | orchestrator | 14:06:28.374 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:28.374893 | orchestrator | 14:06:28.374 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:28.374897 | orchestrator | 14:06:28.374 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 14:06:28.374901 | orchestrator | 14:06:28.374 STDOUT terraform:  } 2025-08-29 14:06:28.374905 | orchestrator | 14:06:28.374 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:28.374909 | orchestrator | 14:06:28.374 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 14:06:28.374913 | orchestrator | 14:06:28.374 STDOUT terraform:  } 2025-08-29 14:06:28.374917 | orchestrator | 14:06:28.374 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:28.374921 | orchestrator | 14:06:28.374 STDOUT terraform:  + ip_address = "192.168.16.8/20" 
2025-08-29 14:06:28.374925 | orchestrator | 14:06:28.374 STDOUT terraform:  } 2025-08-29 14:06:28.374928 | orchestrator | 14:06:28.374 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:06:28.374932 | orchestrator | 14:06:28.374 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 14:06:28.374936 | orchestrator | 14:06:28.374 STDOUT terraform:  } 2025-08-29 14:06:28.374940 | orchestrator | 14:06:28.374 STDOUT terraform:  + binding (known after apply) 2025-08-29 14:06:28.374943 | orchestrator | 14:06:28.374 STDOUT terraform:  + fixed_ip { 2025-08-29 14:06:28.374947 | orchestrator | 14:06:28.374 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-08-29 14:06:28.374951 | orchestrator | 14:06:28.374 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 14:06:28.374955 | orchestrator | 14:06:28.374 STDOUT terraform:  } 2025-08-29 14:06:28.374958 | orchestrator | 14:06:28.374 STDOUT terraform:  } 2025-08-29 14:06:28.374962 | orchestrator | 14:06:28.374 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-08-29 14:06:28.374968 | orchestrator | 14:06:28.374 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-08-29 14:06:28.374972 | orchestrator | 14:06:28.374 STDOUT terraform:  + force_destroy = false 2025-08-29 14:06:28.374975 | orchestrator | 14:06:28.374 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.374982 | orchestrator | 14:06:28.374 STDOUT terraform:  + port_id = (known after apply) 2025-08-29 14:06:28.374987 | orchestrator | 14:06:28.374 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.375025 | orchestrator | 14:06:28.374 STDOUT terraform:  + router_id = (known after apply) 2025-08-29 14:06:28.375056 | orchestrator | 14:06:28.375 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 14:06:28.375062 | orchestrator | 14:06:28.375 STDOUT terraform:  } 2025-08-29 14:06:28.375102 | orchestrator | 
14:06:28.375 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-08-29 14:06:28.375137 | orchestrator | 14:06:28.375 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-08-29 14:06:28.375188 | orchestrator | 14:06:28.375 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 14:06:28.375225 | orchestrator | 14:06:28.375 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:06:28.375243 | orchestrator | 14:06:28.375 STDOUT terraform:  + availability_zone_hints = [ 2025-08-29 14:06:28.375249 | orchestrator | 14:06:28.375 STDOUT terraform:  + "nova", 2025-08-29 14:06:28.375270 | orchestrator | 14:06:28.375 STDOUT terraform:  ] 2025-08-29 14:06:28.375301 | orchestrator | 14:06:28.375 STDOUT terraform:  + distributed = (known after apply) 2025-08-29 14:06:28.375416 | orchestrator | 14:06:28.375 STDOUT terraform:  + enable_snat = (known after apply) 2025-08-29 14:06:28.375469 | orchestrator | 14:06:28.375 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-08-29 14:06:28.375508 | orchestrator | 14:06:28.375 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-08-29 14:06:28.375564 | orchestrator | 14:06:28.375 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.375594 | orchestrator | 14:06:28.375 STDOUT terraform:  + name = "testbed" 2025-08-29 14:06:28.375634 | orchestrator | 14:06:28.375 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.375671 | orchestrator | 14:06:28.375 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:28.375700 | orchestrator | 14:06:28.375 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-08-29 14:06:28.375705 | orchestrator | 14:06:28.375 STDOUT terraform:  } 2025-08-29 14:06:28.375763 | orchestrator | 14:06:28.375 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-08-29 14:06:28.375818 
| orchestrator | 14:06:28.375 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-08-29 14:06:28.375843 | orchestrator | 14:06:28.375 STDOUT terraform:  + description = "ssh" 2025-08-29 14:06:28.375872 | orchestrator | 14:06:28.375 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:06:28.375897 | orchestrator | 14:06:28.375 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:06:28.375934 | orchestrator | 14:06:28.375 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.375959 | orchestrator | 14:06:28.375 STDOUT terraform:  + port_range_max = 22 2025-08-29 14:06:28.375970 | orchestrator | 14:06:28.375 STDOUT terraform:  + port_range_min = 22 2025-08-29 14:06:28.376002 | orchestrator | 14:06:28.375 STDOUT terraform:  + protocol = "tcp" 2025-08-29 14:06:28.376039 | orchestrator | 14:06:28.375 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.376075 | orchestrator | 14:06:28.376 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:06:28.376111 | orchestrator | 14:06:28.376 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:06:28.376140 | orchestrator | 14:06:28.376 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 14:06:28.376176 | orchestrator | 14:06:28.376 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:06:28.376212 | orchestrator | 14:06:28.376 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:28.376217 | orchestrator | 14:06:28.376 STDOUT terraform:  } 2025-08-29 14:06:28.376272 | orchestrator | 14:06:28.376 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-08-29 14:06:28.376325 | orchestrator | 14:06:28.376 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-08-29 14:06:28.376353 | orchestrator | 14:06:28.376 STDOUT terraform:  + 
description = "wireguard" 2025-08-29 14:06:28.376382 | orchestrator | 14:06:28.376 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:06:28.376407 | orchestrator | 14:06:28.376 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:06:28.376444 | orchestrator | 14:06:28.376 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.376469 | orchestrator | 14:06:28.376 STDOUT terraform:  + port_range_max = 51820 2025-08-29 14:06:28.376494 | orchestrator | 14:06:28.376 STDOUT terraform:  + port_range_min = 51820 2025-08-29 14:06:28.376527 | orchestrator | 14:06:28.376 STDOUT terraform:  + protocol = "udp" 2025-08-29 14:06:28.376564 | orchestrator | 14:06:28.376 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.376599 | orchestrator | 14:06:28.376 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:06:28.376636 | orchestrator | 14:06:28.376 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:06:28.376666 | orchestrator | 14:06:28.376 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 14:06:28.376705 | orchestrator | 14:06:28.376 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:06:28.376738 | orchestrator | 14:06:28.376 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:28.376743 | orchestrator | 14:06:28.376 STDOUT terraform:  } 2025-08-29 14:06:28.376797 | orchestrator | 14:06:28.376 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-08-29 14:06:28.376850 | orchestrator | 14:06:28.376 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-08-29 14:06:28.376878 | orchestrator | 14:06:28.376 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:06:28.376903 | orchestrator | 14:06:28.376 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:06:28.376940 | orchestrator | 14:06:28.376 STDOUT terraform:  + id = (known 
after apply) 2025-08-29 14:06:28.376960 | orchestrator | 14:06:28.376 STDOUT terraform:  + protocol = "tcp" 2025-08-29 14:06:28.376998 | orchestrator | 14:06:28.376 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.377034 | orchestrator | 14:06:28.376 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:06:28.377070 | orchestrator | 14:06:28.377 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:06:28.377105 | orchestrator | 14:06:28.377 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-08-29 14:06:28.377141 | orchestrator | 14:06:28.377 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:06:28.377177 | orchestrator | 14:06:28.377 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:28.377182 | orchestrator | 14:06:28.377 STDOUT terraform:  } 2025-08-29 14:06:28.377237 | orchestrator | 14:06:28.377 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-08-29 14:06:28.377289 | orchestrator | 14:06:28.377 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-08-29 14:06:28.377318 | orchestrator | 14:06:28.377 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:06:28.377343 | orchestrator | 14:06:28.377 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:06:28.377381 | orchestrator | 14:06:28.377 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.377406 | orchestrator | 14:06:28.377 STDOUT terraform:  + protocol = "udp" 2025-08-29 14:06:28.377443 | orchestrator | 14:06:28.377 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.377478 | orchestrator | 14:06:28.377 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:06:28.377522 | orchestrator | 14:06:28.377 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:06:28.377558 | orchestrator | 
14:06:28.377 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-08-29 14:06:28.377593 | orchestrator | 14:06:28.377 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:06:28.377629 | orchestrator | 14:06:28.377 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:28.377634 | orchestrator | 14:06:28.377 STDOUT terraform:  } 2025-08-29 14:06:28.377689 | orchestrator | 14:06:28.377 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-08-29 14:06:28.377741 | orchestrator | 14:06:28.377 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-08-29 14:06:28.377769 | orchestrator | 14:06:28.377 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:06:28.377794 | orchestrator | 14:06:28.377 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:06:28.377830 | orchestrator | 14:06:28.377 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:06:28.377855 | orchestrator | 14:06:28.377 STDOUT terraform:  + protocol = "icmp" 2025-08-29 14:06:28.377894 | orchestrator | 14:06:28.377 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:06:28.377926 | orchestrator | 14:06:28.377 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:06:28.377963 | orchestrator | 14:06:28.377 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:06:28.377992 | orchestrator | 14:06:28.377 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 14:06:28.378081 | orchestrator | 14:06:28.377 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:06:28.382103 | orchestrator | 14:06:28.378 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:06:28.382115 | orchestrator | 14:06:28.378 STDOUT terraform:  } 2025-08-29 14:06:28.382120 | orchestrator | 14:06:28.378 STDOUT terraform:  # 
2025-08-29 14:06:28.378 | orchestrator | STDOUT terraform:
  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
      + network_id        = (known after apply)
      + no_gateway        = false
      + region            = (known after apply)
      + service_types     = (known after apply)
      + tenant_id         = (known after apply)

      + allocation_pool {
          + end   = "192.168.31.250"
          + start = "192.168.31.200"
        }
    }

  # terraform_data.image will be created
  + resource "terraform_data" "image" {
      + id     = (known after apply)
      + input  = "Ubuntu 24.04"
      + output = (known after apply)
    }

  # terraform_data.image_node will be created
  + resource "terraform_data" "image_node" {
      + id     = (known after apply)
      + input  = "Ubuntu 24.04"
      + output = (known after apply)
    }

  Plan: 64 to add, 0 to change, 0 to destroy.

  Changes to Outputs:
    + manager_address = (sensitive value)
    + private_key     = (sensitive value)
2025-08-29 14:06:28.545 | orchestrator | STDOUT terraform: terraform_data.image: Creating...
2025-08-29 14:06:28.545 | orchestrator | STDOUT terraform: terraform_data.image_node: Creating...
2025-08-29 14:06:28.545 | orchestrator | STDOUT terraform: terraform_data.image: Creation complete after 0s [id=7248d835-b51b-50bf-c638-0af324441987]
2025-08-29 14:06:28.545 | orchestrator | STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=2abbbdc1-3652-0ac4-c824-7fb4f5c6f69c]
2025-08-29 14:06:28.567 | orchestrator | STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-08-29 14:06:28.567 | orchestrator | STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-08-29 14:06:28.585 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-08-29 14:06:28.587 | orchestrator | STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-08-29 14:06:28.603 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-08-29 14:06:28.603 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-08-29 14:06:28.603 | orchestrator | STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-08-29 14:06:28.605 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-08-29 14:06:28.611 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-08-29 14:06:28.612 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-08-29 14:06:29.036 | orchestrator | STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-08-29 14:06:29.044 | orchestrator | STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-08-29 14:06:29.046 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-08-29 14:06:29.049 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-08-29 14:06:29.226 | orchestrator | STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-08-29 14:06:29.232 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-08-29 14:06:29.620 | orchestrator | STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=356f90d6-d6e9-4e9f-99af-4849bd525486]
2025-08-29 14:06:29.625 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-08-29 14:06:32.205 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=d88be871-1de9-4c4e-96cc-2f99c6f9bcd6]
2025-08-29 14:06:32.214 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-08-29 14:06:32.234 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=8530debd-d017-4f26-8837-9c6ea90d3888]
2025-08-29 14:06:32.241 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-08-29 14:06:32.248 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=994268b6-638b-49ae-9337-a5a883f2caf6]
2025-08-29 14:06:32.258 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=4ecbb96d-8085-4234-aac0-aef459b35ca9]
2025-08-29 14:06:32.259 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-08-29 14:06:32.261 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-08-29 14:06:32.262 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=e2b81981-b087-421f-a1f1-ab20210f7cdd]
2025-08-29 14:06:32.267 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-08-29 14:06:32.285 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=b5eca971-d360-4d10-a7ea-637f4b5fbeee]
2025-08-29 14:06:32.289 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-08-29 14:06:32.310 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=e4cd25e5-e70d-493f-a3e8-ae6027cdfc98]
2025-08-29 14:06:32.325 | orchestrator | STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-08-29 14:06:32.325 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=2af32ac1-7951-4112-a429-d0343cb67ad9]
2025-08-29 14:06:32.334 | orchestrator | STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-08-29 14:06:32.430 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=cfe2d7b1-468c-4925-a2ef-e57e3e9904a6]
2025-08-29 14:06:32.437 | orchestrator | STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-08-29 14:06:32.852 | orchestrator | STDOUT terraform: local_file.id_rsa_pub: Creation complete after 1s [id=79c75ff9fd615d4bf1578ece2af1bae0bc6746d9]
2025-08-29 14:06:32.855 | orchestrator | STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 1s [id=684be17ab3b73b1f2689f9cff636bdb4ff1d1c6e]
2025-08-29 14:06:32.977 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=3093a418-fd9e-4941-a2b1-e8ddc6c5f50e]
2025-08-29 14:06:33.345 | orchestrator | STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=19174d9a-9aec-472b-b80d-bf91a97ba314]
2025-08-29 14:06:33.351 | orchestrator | STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-08-29 14:06:35.592 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=64be458b-ccc0-4bb0-97ba-3c13881a6e5a]
2025-08-29 14:06:35.605 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=b745d13d-bb5e-416e-bb08-91f06edea026]
2025-08-29 14:06:35.656 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=5911b2dd-dfb9-4225-b72b-248290496bdf]
2025-08-29 14:06:35.683 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=ef226c67-7d86-4d5e-a378-f55055673820]
2025-08-29 14:06:35.703 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=f87c7dfb-0134-4ec7-a213-44c56380f9ae]
2025-08-29 14:06:35.709 | orchestrator | STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=02781145-d1c2-4e4e-a6de-55bca7cba69d]
2025-08-29 14:06:36.545 | orchestrator | STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 4s [id=9fb2af3b-fc94-4755-8ce7-6aa9639258b7]
2025-08-29 14:06:36.551 | orchestrator | STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-08-29 14:06:36.552 | orchestrator | STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-08-29 14:06:36.556 | orchestrator | STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-08-29 14:06:36.766 | orchestrator | STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=3f82af6c-6c9a-46a8-8373-d8cf12d31e00]
2025-08-29 14:06:36.782 | orchestrator | STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-08-29 14:06:36.786 | orchestrator | STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-08-29 14:06:36.786 | orchestrator | STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-08-29 14:06:36.788 | orchestrator | STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-08-29 14:06:36.793 | orchestrator | STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-08-29 14:06:36.796 | orchestrator | STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-08-29 14:06:36.798 | orchestrator | STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=e0e30d97-cc33-4186-81ff-4a619299afab]
2025-08-29 14:06:36.799 | orchestrator | STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-08-29 14:06:36.800 | orchestrator | STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-08-29 14:06:36.804 | orchestrator | STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-08-29 14:06:37.077 | orchestrator | STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=32dbb8e5-8bb1-4026-8f2e-57f693fcb08b]
2025-08-29 14:06:37.097 | orchestrator | STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-08-29 14:06:37.324 | orchestrator | STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=7ce57936-de3b-4ebc-bd22-b9439554991a]
2025-08-29 14:06:37.335 | orchestrator | STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-08-29 14:06:37.388 | orchestrator | STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 0s [id=82936777-00bc-48e0-8b3b-3b850b053147]
2025-08-29 14:06:37.393 | orchestrator | STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-08-29 14:06:37.501 | orchestrator | STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=d8918caf-073c-4f7d-ba1b-0be702b30985]
2025-08-29 14:06:37.513 | orchestrator | STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-08-29 14:06:37.529 | orchestrator | STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=41d039dc-7a30-4429-b317-b2462eb64ee2]
2025-08-29 14:06:37.533 | orchestrator | STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-08-29 14:06:37.547 | orchestrator | STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=33f6bfca-66ac-4927-8c8d-3fe2a118906c]
2025-08-29 14:06:37.553 | orchestrator | STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=3419a9f7-c7ba-45d2-84e8-b0ddfd7cecd8]
2025-08-29 14:06:37.556 | orchestrator | STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-08-29 14:06:37.565 | orchestrator | STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-08-29 14:06:37.691 | orchestrator | STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=265d06a1-867c-44a6-bfe7-80cca51a46a9]
2025-08-29 14:06:37.706 | orchestrator | STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=d3a7c8ac-9b74-41e3-97b0-4592185ab5f0]
2025-08-29 14:06:37.727 | orchestrator | STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=2035f102-4145-40a0-8070-e5968b045d32]
2025-08-29 14:06:37.847 | orchestrator | STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=88244ebf-c29c-4db1-b9ca-15c84f0a0d92]
2025-08-29 14:06:37.896 | orchestrator | STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=0b521db5-cace-41f1-8115-6fc0bfe81c3a]
2025-08-29 14:06:38.006 | orchestrator | STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=7aefe871-28d4-4b1c-b6b7-51a572a1f5dc]
2025-08-29 14:06:38.047 | orchestrator | STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=816dac77-4d1f-4f5d-a1f7-96e54de668df]
2025-08-29 14:06:38.059 | orchestrator | STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=162b5293-e52d-486a-bc71-a83ec967318e]
2025-08-29 14:06:38.219 | orchestrator | STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=9ae56d39-7120-457f-b89e-3ab50f6a9dbe]
2025-08-29 14:06:38.631 | orchestrator | STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=aaef5167-7fa4-41a2-85b7-d12fceecd7cc]
2025-08-29 14:06:38.652 | orchestrator | STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-08-29 14:06:38.655 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-08-29 14:06:38.667 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-08-29 14:06:38.670 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-08-29 14:06:38.672 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-08-29 14:06:38.672 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-08-29 14:06:38.674 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-08-29 14:06:40.548 | orchestrator | STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=f9268818-d31e-40c1-b7d7-c5ab06427395]
2025-08-29 14:06:40.561 | orchestrator | STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-08-29 14:06:40.566 | orchestrator | STDOUT terraform: local_file.inventory: Creating...
2025-08-29 14:06:40.567 | orchestrator | STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-08-29 14:06:40.571 | orchestrator | STDOUT terraform: local_file.inventory: Creation complete after 0s [id=891aed934a9d454022103a30c70614c2a36dcb38]
2025-08-29 14:06:40.572 | orchestrator | STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=bc9b7dcfb0732c617919513b69c1df7231ce35a1]
2025-08-29 14:06:41.295 | orchestrator | STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=f9268818-d31e-40c1-b7d7-c5ab06427395]
2025-08-29 14:06:48.658 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-08-29 14:06:48.667 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-08-29 14:06:48.672 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-08-29 14:06:48.675 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-08-29 14:06:48.675 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-08-29 14:06:48.682 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-08-29 14:06:58.658 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-08-29 14:06:58.667 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-08-29 14:06:58.673 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-08-29 14:06:58.676 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-08-29 14:06:58.676 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-08-29 14:06:58.682 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-08-29 14:06:59.725 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=43199061-cde4-4098-b632-0ebdb0505d37]
2025-08-29 14:07:00.344 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=c040f226-325b-4733-99c8-f0d218c9908b]
2025-08-29 14:07:00.452 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=6d41cac3-6411-4eca-af2a-8ba77c51dbe8]
2025-08-29 14:07:08.660 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-08-29 14:07:08.668 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-08-29 14:07:08.683 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-08-29 14:07:10.074 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=ae043351-bdd4-468e-b789-41807a16be92]
2025-08-29 14:07:10.141 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=177612ab-ae66-4b4e-a42b-a7cc316f9f57]
2025-08-29 14:07:10.909 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 32s [id=c801d623-391b-410d-b019-c6755571e645]
2025-08-29 14:07:10.939 | orchestrator | STDOUT terraform: null_resource.node_semaphore: Creating...
2025-08-29 14:07:10.941 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-08-29 14:07:10.944 | orchestrator | STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=7833978036120755557]
2025-08-29 14:07:10.953 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-08-29 14:07:10.973 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-08-29 14:07:10.973 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-08-29 14:07:10.976 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-08-29 14:07:10.980 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-08-29 14:07:10.984 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-08-29 14:07:10.985 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-08-29 14:07:10.988 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-08-29 14:07:11.010 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-08-29 14:07:14.367 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=177612ab-ae66-4b4e-a42b-a7cc316f9f57/994268b6-638b-49ae-9337-a5a883f2caf6]
2025-08-29 14:07:14.369 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=c040f226-325b-4733-99c8-f0d218c9908b/d88be871-1de9-4c4e-96cc-2f99c6f9bcd6]
2025-08-29 14:07:14.404 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=ae043351-bdd4-468e-b789-41807a16be92/e4cd25e5-e70d-493f-a3e8-ae6027cdfc98]
2025-08-29 14:07:20.512 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=c040f226-325b-4733-99c8-f0d218c9908b/4ecbb96d-8085-4234-aac0-aef459b35ca9]
2025-08-29 14:07:20.974 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-08-29 14:07:20.978 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Still creating... [10s elapsed]
2025-08-29 14:07:20.985 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Still creating... [10s elapsed]
2025-08-29 14:07:20.987 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Still creating... [10s elapsed]
2025-08-29 14:07:20.992 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Still creating... [10s elapsed]
2025-08-29 14:07:20.997 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Still creating... [10s elapsed]
2025-08-29 14:07:23.263 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 12s [id=177612ab-ae66-4b4e-a42b-a7cc316f9f57/8530debd-d017-4f26-8837-9c6ea90d3888]
2025-08-29 14:07:23.263 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 12s [id=ae043351-bdd4-468e-b789-41807a16be92/2af32ac1-7951-4112-a429-d0343cb67ad9]
2025-08-29 14:07:23.264 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 12s [id=177612ab-ae66-4b4e-a42b-a7cc316f9f57/b5eca971-d360-4d10-a7ea-637f4b5fbeee]
2025-08-29 14:07:23.264 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 12s [id=ae043351-bdd4-468e-b789-41807a16be92/cfe2d7b1-468c-4925-a2ef-e57e3e9904a6]
2025-08-29 14:07:23.265 | orchestrator | STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 12s [id=c040f226-325b-4733-99c8-f0d218c9908b/e2b81981-b087-421f-a1f1-ab20210f7cdd]
2025-08-29 14:07:30.978 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-08-29 14:07:31.422 | orchestrator | STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=0b8409e1-a6f3-4a03-90b8-4d24e3f87cee]
2025-08-29 14:07:31.441 | orchestrator | STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-08-29 14:07:31.441 | orchestrator | STDOUT terraform: Outputs:
2025-08-29 14:07:31.441 | orchestrator | STDOUT terraform: manager_address =
2025-08-29 14:07:31.441 | orchestrator | STDOUT terraform: private_key =
2025-08-29 14:07:31.933 | orchestrator | ok: Runtime: 0:01:09.542871
2025-08-29 14:07:31.959 |
2025-08-29 14:07:31.959 | TASK [Create infrastructure (stable)]
2025-08-29 14:07:32.491 | orchestrator | skipping: Conditional result was False
2025-08-29 14:07:32.509 |
2025-08-29 14:07:32.509 | TASK [Fetch manager address]
2025-08-29 14:07:32.984 | orchestrator | ok
2025-08-29 14:07:32.994 |
2025-08-29 14:07:32.994 | TASK [Set manager_host address]
2025-08-29 14:07:33.068 | orchestrator | ok
2025-08-29 14:07:33.075 |
2025-08-29 14:07:33.075 | LOOP [Update ansible collections]
2025-08-29 14:07:37.226 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-08-29 14:07:37.227 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-08-29 14:07:37.227 | orchestrator | Starting galaxy collection install process
2025-08-29 14:07:37.227 | orchestrator | Process install dependency map
2025-08-29 14:07:37.227 | orchestrator | Starting collection install process
2025-08-29 14:07:37.227 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons'
2025-08-29 14:07:37.227 | orchestrator | Created collection for 
osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2025-08-29 14:07:37.227527 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-08-29 14:07:37.227592 | orchestrator | ok: Item: commons Runtime: 0:00:03.840119 2025-08-29 14:07:39.844031 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 14:07:39.844206 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-08-29 14:07:39.844259 | orchestrator | Starting galaxy collection install process 2025-08-29 14:07:39.844299 | orchestrator | Process install dependency map 2025-08-29 14:07:39.844336 | orchestrator | Starting collection install process 2025-08-29 14:07:39.844371 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2025-08-29 14:07:39.844406 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2025-08-29 14:07:39.844439 | orchestrator | osism.services:999.0.0 was installed successfully 2025-08-29 14:07:39.844530 | orchestrator | ok: Item: services Runtime: 0:00:02.356104 2025-08-29 14:07:39.861648 | 2025-08-29 14:07:39.861765 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-08-29 14:07:50.395681 | orchestrator | ok 2025-08-29 14:07:50.408201 | 2025-08-29 14:07:50.408353 | TASK [Wait a little longer for the manager so that everything is ready] 2025-08-29 14:08:50.467719 | orchestrator | ok 2025-08-29 14:08:50.480050 | 2025-08-29 14:08:50.480176 | TASK [Fetch manager ssh hostkey] 2025-08-29 14:08:52.054766 | orchestrator | Output suppressed because no_log was given 2025-08-29 14:08:52.070383 | 2025-08-29 14:08:52.070614 | TASK [Get ssh keypair from terraform environment] 2025-08-29 14:08:52.642669 | orchestrator | ok: Runtime: 0:00:00.012206 
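The task "Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"" above polls the manager's SSH banner before proceeding. A minimal shell sketch of the same idea, assuming an illustrative `wait_for_openssh` helper that is not part of the job itself:

```shell
# Sketch only: poll a banner source until it reports "OpenSSH".
# "$@" is whatever command fetches the banner; in real use it would read
# from the wire, e.g.: sh -c 'exec 3<>/dev/tcp/$HOST/22; head -c 64 <&3'
wait_for_openssh() {
    attempts=$1; shift
    while [ "$attempts" -gt 0 ]; do
        banner=$("$@" 2>/dev/null || true)
        case $banner in
            *OpenSSH*) return 0 ;;   # banner seen, host is ready
        esac
        attempts=$((attempts - 1))
        sleep 1
    done
    return 1                         # gave up after the timeout
}
```

The job's actual implementation is the Ansible `wait_for` module with a `search_regex`; this loop only illustrates the polling pattern.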
2025-08-29 14:08:52.652733 | 2025-08-29 14:08:52.652859 | TASK [Point out that the following task takes some time and does not give any output] 2025-08-29 14:08:52.701109 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-08-29 14:08:52.711177 | 2025-08-29 14:08:52.711307 | TASK [Run manager part 0] 2025-08-29 14:08:54.467680 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 14:08:54.637100 | orchestrator | 2025-08-29 14:08:54.637185 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-08-29 14:08:54.637200 | orchestrator | 2025-08-29 14:08:54.637227 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-08-29 14:08:56.504674 | orchestrator | ok: [testbed-manager] 2025-08-29 14:08:56.504717 | orchestrator | 2025-08-29 14:08:56.504734 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-08-29 14:08:56.504743 | orchestrator | 2025-08-29 14:08:56.504751 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:08:58.924383 | orchestrator | ok: [testbed-manager] 2025-08-29 14:08:58.924471 | orchestrator | 2025-08-29 14:08:58.924484 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-08-29 14:08:59.631784 | orchestrator | ok: [testbed-manager] 2025-08-29 14:08:59.631829 | orchestrator | 2025-08-29 14:08:59.631837 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-08-29 14:08:59.688017 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:08:59.688062 | orchestrator | 2025-08-29 14:08:59.688070 | orchestrator | TASK [Update package cache] 
**************************************************** 2025-08-29 14:08:59.732100 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:08:59.732140 | orchestrator | 2025-08-29 14:08:59.732148 | orchestrator | TASK [Install required packages] *********************************************** 2025-08-29 14:08:59.775669 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:08:59.775708 | orchestrator | 2025-08-29 14:08:59.775714 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-08-29 14:08:59.814465 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:08:59.814506 | orchestrator | 2025-08-29 14:08:59.814511 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-08-29 14:08:59.842161 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:08:59.842188 | orchestrator | 2025-08-29 14:08:59.842198 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-08-29 14:08:59.874463 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:08:59.874574 | orchestrator | 2025-08-29 14:08:59.874583 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-08-29 14:08:59.916015 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:08:59.916052 | orchestrator | 2025-08-29 14:08:59.916059 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-08-29 14:09:00.738876 | orchestrator | changed: [testbed-manager] 2025-08-29 14:09:00.738923 | orchestrator | 2025-08-29 14:09:00.738930 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-08-29 14:11:35.838562 | orchestrator | changed: [testbed-manager] 2025-08-29 14:11:35.838631 | orchestrator | 2025-08-29 14:11:35.838641 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-08-29 
14:12:51.811105 | orchestrator | changed: [testbed-manager] 2025-08-29 14:12:51.811197 | orchestrator | 2025-08-29 14:12:51.811211 | orchestrator | TASK [Install required packages] *********************************************** 2025-08-29 14:13:23.298447 | orchestrator | changed: [testbed-manager] 2025-08-29 14:13:23.298630 | orchestrator | 2025-08-29 14:13:23.298651 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-08-29 14:13:32.077372 | orchestrator | changed: [testbed-manager] 2025-08-29 14:13:32.077486 | orchestrator | 2025-08-29 14:13:32.077504 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-08-29 14:13:32.122917 | orchestrator | ok: [testbed-manager] 2025-08-29 14:13:32.123029 | orchestrator | 2025-08-29 14:13:32.123046 | orchestrator | TASK [Get current user] ******************************************************** 2025-08-29 14:13:32.945862 | orchestrator | ok: [testbed-manager] 2025-08-29 14:13:32.945901 | orchestrator | 2025-08-29 14:13:32.945910 | orchestrator | TASK [Create venv directory] *************************************************** 2025-08-29 14:13:33.661140 | orchestrator | changed: [testbed-manager] 2025-08-29 14:13:33.661229 | orchestrator | 2025-08-29 14:13:33.661245 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-08-29 14:13:39.853632 | orchestrator | changed: [testbed-manager] 2025-08-29 14:13:39.853711 | orchestrator | 2025-08-29 14:13:39.853750 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-08-29 14:13:45.819249 | orchestrator | changed: [testbed-manager] 2025-08-29 14:13:45.819340 | orchestrator | 2025-08-29 14:13:45.819358 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-08-29 14:13:48.330792 | orchestrator | changed: [testbed-manager] 2025-08-29 14:13:48.332238 | 
orchestrator | 2025-08-29 14:13:48.332266 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-08-29 14:13:50.072001 | orchestrator | changed: [testbed-manager] 2025-08-29 14:13:50.072096 | orchestrator | 2025-08-29 14:13:50.072111 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-08-29 14:13:51.195120 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-08-29 14:13:51.195782 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-08-29 14:13:51.195808 | orchestrator | 2025-08-29 14:13:51.195821 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-08-29 14:13:51.234697 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-08-29 14:13:51.234769 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-08-29 14:13:51.234783 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-08-29 14:13:51.234796 | orchestrator | deprecation_warnings=False in ansible.cfg. 
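The "Create directories in /opt/src" task above creates one directory per source repository before syncing the collections into them. A minimal sketch of that pattern, using a temporary base directory as a stand-in for /opt/src:

```shell
# Sketch only: idempotent per-repo directory creation, mirroring the
# "Create directories in /opt/src" task. BASE stands in for /opt/src.
BASE=$(mktemp -d)
for repo in osism/ansible-collection-commons osism/ansible-collection-services; do
    install -d "$BASE/$repo"   # creates parent dirs; no error if already present
done
```

`install -d` (like the Ansible `file` module with `state: directory`) is safe to re-run, which is what makes the step idempotent across job retries.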
2025-08-29 14:14:02.249308 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-08-29 14:14:02.249484 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-08-29 14:14:02.249493 | orchestrator | 2025-08-29 14:14:02.249499 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-08-29 14:14:02.808865 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:02.808903 | orchestrator | 2025-08-29 14:14:02.808909 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-08-29 14:14:23.418943 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-08-29 14:14:23.419141 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-08-29 14:14:23.419162 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-08-29 14:14:23.419175 | orchestrator | 2025-08-29 14:14:23.419188 | orchestrator | TASK [Install local collections] *********************************************** 2025-08-29 14:14:25.717571 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-08-29 14:14:25.717608 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-08-29 14:14:25.717613 | orchestrator | 2025-08-29 14:14:25.717618 | orchestrator | PLAY [Create operator user] **************************************************** 2025-08-29 14:14:25.717623 | orchestrator | 2025-08-29 14:14:25.717627 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:14:27.129174 | orchestrator | ok: [testbed-manager] 2025-08-29 14:14:27.129211 | orchestrator | 2025-08-29 14:14:27.129218 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-08-29 14:14:27.189250 | orchestrator | ok: [testbed-manager] 2025-08-29 14:14:27.189325 | 
orchestrator | 2025-08-29 14:14:27.189339 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-08-29 14:14:27.275070 | orchestrator | ok: [testbed-manager] 2025-08-29 14:14:27.275132 | orchestrator | 2025-08-29 14:14:27.275139 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-08-29 14:14:28.041976 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:28.042127 | orchestrator | 2025-08-29 14:14:28.042144 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-08-29 14:14:28.760601 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:28.760729 | orchestrator | 2025-08-29 14:14:28.760746 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-08-29 14:14:30.144535 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-08-29 14:14:30.144607 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-08-29 14:14:30.144616 | orchestrator | 2025-08-29 14:14:30.144647 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-08-29 14:14:31.501953 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:31.502099 | orchestrator | 2025-08-29 14:14:31.502120 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-08-29 14:14:33.283211 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:14:33.283308 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-08-29 14:14:33.283323 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:14:33.283337 | orchestrator | 2025-08-29 14:14:33.283350 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-08-29 14:14:33.343191 | orchestrator | skipping: 
[testbed-manager] 2025-08-29 14:14:33.343271 | orchestrator | 2025-08-29 14:14:33.343286 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-08-29 14:14:33.907321 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:33.907435 | orchestrator | 2025-08-29 14:14:33.907453 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-08-29 14:14:33.990935 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:14:33.990983 | orchestrator | 2025-08-29 14:14:33.990989 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-08-29 14:14:34.874246 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 14:14:34.874334 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:34.874347 | orchestrator | 2025-08-29 14:14:34.874358 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-08-29 14:14:34.908950 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:14:34.909072 | orchestrator | 2025-08-29 14:14:34.909091 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-08-29 14:14:34.936785 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:14:34.936825 | orchestrator | 2025-08-29 14:14:34.936834 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-08-29 14:14:34.980938 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:14:34.981006 | orchestrator | 2025-08-29 14:14:34.981020 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-08-29 14:14:35.037312 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:14:35.037429 | orchestrator | 2025-08-29 14:14:35.037442 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-08-29 14:14:35.780491 | orchestrator 
| ok: [testbed-manager] 2025-08-29 14:14:35.780692 | orchestrator | 2025-08-29 14:14:35.780712 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-08-29 14:14:35.780725 | orchestrator | 2025-08-29 14:14:35.780736 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:14:37.179032 | orchestrator | ok: [testbed-manager] 2025-08-29 14:14:37.179120 | orchestrator | 2025-08-29 14:14:37.179137 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-08-29 14:14:38.119039 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:38.119128 | orchestrator | 2025-08-29 14:14:38.119145 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:14:38.119160 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-08-29 14:14:38.119173 | orchestrator | 2025-08-29 14:14:38.448113 | orchestrator | ok: Runtime: 0:05:45.205829 2025-08-29 14:14:38.466421 | 2025-08-29 14:14:38.466574 | TASK [Point out that logging in on the manager is now possible] 2025-08-29 14:14:38.513143 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-08-29 14:14:38.523248 | 2025-08-29 14:14:38.523389 | TASK [Point out that the following task takes some time and does not give any output] 2025-08-29 14:14:38.559743 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
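The osism.commons.operator tasks above append `export LANGUAGE=C.UTF-8`, `export LANG=C.UTF-8`, and `export LC_ALL=C.UTF-8` to the operator's .bashrc with lineinfile-style semantics. A sketch of that append-once behaviour, using a temporary file as a stand-in for the real .bashrc:

```shell
# Sketch only: append each locale export to a shell rc file exactly once,
# approximating the lineinfile behaviour of the operator role's task.
RC=$(mktemp)   # stands in for the operator user's ~/.bashrc
for line in 'export LANGUAGE=C.UTF-8' 'export LANG=C.UTF-8' 'export LC_ALL=C.UTF-8'; do
    grep -qxF "$line" "$RC" || printf '%s\n' "$line" >> "$RC"
done
```

Because each line is only appended when an exact match (`grep -qxF`) is absent, re-running the step — as happens on repeated playbook runs — leaves the file unchanged.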
2025-08-29 14:14:38.568773 | 2025-08-29 14:14:38.568892 | TASK [Run manager part 1 + 2] 2025-08-29 14:14:40.264387 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 14:14:40.361252 | orchestrator | 2025-08-29 14:14:40.361324 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-08-29 14:14:40.361339 | orchestrator | 2025-08-29 14:14:40.361402 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:14:42.826699 | orchestrator | ok: [testbed-manager] 2025-08-29 14:14:42.826930 | orchestrator | 2025-08-29 14:14:42.826992 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-08-29 14:14:42.864959 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:14:42.865047 | orchestrator | 2025-08-29 14:14:42.865067 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-08-29 14:14:42.918487 | orchestrator | ok: [testbed-manager] 2025-08-29 14:14:42.918559 | orchestrator | 2025-08-29 14:14:42.918573 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-08-29 14:14:42.961683 | orchestrator | ok: [testbed-manager] 2025-08-29 14:14:42.961728 | orchestrator | 2025-08-29 14:14:42.961735 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-08-29 14:14:43.025339 | orchestrator | ok: [testbed-manager] 2025-08-29 14:14:43.025451 | orchestrator | 2025-08-29 14:14:43.025470 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-08-29 14:14:43.094548 | orchestrator | ok: [testbed-manager] 2025-08-29 14:14:43.094599 | orchestrator | 2025-08-29 14:14:43.094606 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-08-29 14:14:43.144018 | 
orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-08-29 14:14:43.144074 | orchestrator | 2025-08-29 14:14:43.144083 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-08-29 14:14:43.862312 | orchestrator | ok: [testbed-manager] 2025-08-29 14:14:43.862416 | orchestrator | 2025-08-29 14:14:43.862436 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-08-29 14:14:43.912629 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:14:43.912712 | orchestrator | 2025-08-29 14:14:43.912728 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-08-29 14:14:45.273710 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:45.273803 | orchestrator | 2025-08-29 14:14:45.273822 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-08-29 14:14:45.840719 | orchestrator | ok: [testbed-manager] 2025-08-29 14:14:45.840789 | orchestrator | 2025-08-29 14:14:45.840801 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-08-29 14:14:46.924874 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:46.924935 | orchestrator | 2025-08-29 14:14:46.924952 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-08-29 14:15:03.328502 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:03.328539 | orchestrator | 2025-08-29 14:15:03.328546 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-08-29 14:15:04.001459 | orchestrator | ok: [testbed-manager] 2025-08-29 14:15:04.001541 | orchestrator | 2025-08-29 14:15:04.001558 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-08-29 14:15:04.056923 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:15:04.056978 | orchestrator | 2025-08-29 14:15:04.056983 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-08-29 14:15:05.011101 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:05.011184 | orchestrator | 2025-08-29 14:15:05.011200 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-08-29 14:15:05.957791 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:05.957876 | orchestrator | 2025-08-29 14:15:05.957897 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-08-29 14:15:06.520930 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:06.521015 | orchestrator | 2025-08-29 14:15:06.521031 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-08-29 14:15:06.564778 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-08-29 14:15:06.564882 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-08-29 14:15:06.564897 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-08-29 14:15:06.564909 | orchestrator | deprecation_warnings=False in ansible.cfg. 
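The "Copy SSH public key" and "Copy SSH private key" tasks above place the Terraform keypair on the manager. A sketch of the pattern with the modes SSH requires (0700 directory, 0600 private key, 0644 public key), using throwaway paths and dummy key material rather than the job's real files:

```shell
# Sketch only: install a keypair with SSH-appropriate modes. SRC and
# HOME_DIR are temporary stand-ins; the real tasks target the ansible
# user's home on testbed-manager.
SRC=$(mktemp -d); HOME_DIR=$(mktemp -d)
printf 'dummy-private-key\n' > "$SRC/id_rsa"
printf 'dummy-public-key\n'  > "$SRC/id_rsa.pub"
install -d -m 700 "$HOME_DIR/.ssh"
install -m 600 "$SRC/id_rsa"     "$HOME_DIR/.ssh/id_rsa"
install -m 644 "$SRC/id_rsa.pub" "$HOME_DIR/.ssh/id_rsa.pub"
```

Getting these modes right matters: OpenSSH refuses a private key that is group- or world-readable, which would break every later step that connects to the nodes.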
2025-08-29 14:15:13.292853 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:13.292934 | orchestrator | 2025-08-29 14:15:13.292949 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-08-29 14:15:21.749901 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-08-29 14:15:21.749944 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-08-29 14:15:21.749952 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-08-29 14:15:21.749958 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-08-29 14:15:21.749968 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-08-29 14:15:21.749974 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-08-29 14:15:21.749980 | orchestrator | 2025-08-29 14:15:21.749987 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-08-29 14:15:22.814084 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:22.814204 | orchestrator | 2025-08-29 14:15:22.814221 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-08-29 14:15:22.857589 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:15:22.857702 | orchestrator | 2025-08-29 14:15:22.857720 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-08-29 14:15:26.001160 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:26.001248 | orchestrator | 2025-08-29 14:15:26.001265 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-08-29 14:15:26.041837 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:15:26.041909 | orchestrator | 2025-08-29 14:15:26.041925 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-08-29 14:16:57.788569 | orchestrator | changed: [testbed-manager] 2025-08-29 
14:16:57.788666 | orchestrator |
2025-08-29 14:16:57.788686 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-08-29 14:16:58.934595 | orchestrator | ok: [testbed-manager]
2025-08-29 14:16:58.934635 | orchestrator |
2025-08-29 14:16:58.934643 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:16:58.934651 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-08-29 14:16:58.934657 | orchestrator |
2025-08-29 14:16:59.210115 | orchestrator | ok: Runtime: 0:02:20.147956
2025-08-29 14:16:59.229550 |
2025-08-29 14:16:59.229742 | TASK [Reboot manager]
2025-08-29 14:17:00.766687 | orchestrator | ok: Runtime: 0:00:00.962064
2025-08-29 14:17:00.783852 |
2025-08-29 14:17:00.784029 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-08-29 14:17:15.438039 | orchestrator | ok
2025-08-29 14:17:15.449045 |
2025-08-29 14:17:15.449180 | TASK [Wait a little longer for the manager so that everything is ready]
2025-08-29 14:18:15.506326 | orchestrator | ok
2025-08-29 14:18:15.516655 |
2025-08-29 14:18:15.516785 | TASK [Deploy manager + bootstrap nodes]
2025-08-29 14:18:17.962613 | orchestrator |
2025-08-29 14:18:17.962793 | orchestrator | # DEPLOY MANAGER
2025-08-29 14:18:17.962817 | orchestrator |
2025-08-29 14:18:17.962831 | orchestrator | + set -e
2025-08-29 14:18:17.962844 | orchestrator | + echo
2025-08-29 14:18:17.962858 | orchestrator | + echo '# DEPLOY MANAGER'
2025-08-29 14:18:17.962874 | orchestrator | + echo
2025-08-29 14:18:17.962924 | orchestrator | + cat /opt/manager-vars.sh
2025-08-29 14:18:17.966190 | orchestrator | export NUMBER_OF_NODES=6
2025-08-29 14:18:17.966229 | orchestrator |
2025-08-29 14:18:17.966241 | orchestrator | export CEPH_VERSION=reef
2025-08-29 14:18:17.966254 | orchestrator | export CONFIGURATION_VERSION=main
2025-08-29 14:18:17.966267 | orchestrator | export MANAGER_VERSION=latest
2025-08-29 14:18:17.966292 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-08-29 14:18:17.966303 | orchestrator |
2025-08-29 14:18:17.966321 | orchestrator | export ARA=false
2025-08-29 14:18:17.966333 | orchestrator | export DEPLOY_MODE=manager
2025-08-29 14:18:17.966351 | orchestrator | export TEMPEST=false
2025-08-29 14:18:17.966362 | orchestrator | export IS_ZUUL=true
2025-08-29 14:18:17.966373 | orchestrator |
2025-08-29 14:18:17.966391 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.249
2025-08-29 14:18:17.966402 | orchestrator | export EXTERNAL_API=false
2025-08-29 14:18:17.966413 | orchestrator |
2025-08-29 14:18:17.966424 | orchestrator | export IMAGE_USER=ubuntu
2025-08-29 14:18:17.966437 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-08-29 14:18:17.966448 | orchestrator |
2025-08-29 14:18:17.966459 | orchestrator | export CEPH_STACK=ceph-ansible
2025-08-29 14:18:17.966552 | orchestrator |
2025-08-29 14:18:17.966568 | orchestrator | + echo
2025-08-29 14:18:17.966580 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-08-29 14:18:17.967769 | orchestrator | ++ export INTERACTIVE=false
2025-08-29 14:18:17.967789 | orchestrator | ++ INTERACTIVE=false
2025-08-29 14:18:17.967801 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-08-29 14:18:17.967813 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-08-29 14:18:17.967828 | orchestrator | + source /opt/manager-vars.sh
2025-08-29 14:18:17.967839 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-08-29 14:18:17.967850 | orchestrator | ++ NUMBER_OF_NODES=6
2025-08-29 14:18:17.967861 | orchestrator | ++ export CEPH_VERSION=reef
2025-08-29 14:18:17.968008 | orchestrator | ++ CEPH_VERSION=reef
2025-08-29 14:18:17.968023 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-08-29 14:18:17.968034 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-08-29 14:18:17.968045 | orchestrator | ++ export MANAGER_VERSION=latest
2025-08-29 14:18:17.968055 | orchestrator | ++ MANAGER_VERSION=latest
2025-08-29 14:18:17.968066 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-08-29 14:18:17.968086 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-08-29 14:18:17.968097 | orchestrator | ++ export ARA=false
2025-08-29 14:18:17.968125 | orchestrator | ++ ARA=false
2025-08-29 14:18:17.968136 | orchestrator | ++ export DEPLOY_MODE=manager
2025-08-29 14:18:17.968146 | orchestrator | ++ DEPLOY_MODE=manager
2025-08-29 14:18:17.968157 | orchestrator | ++ export TEMPEST=false
2025-08-29 14:18:17.968168 | orchestrator | ++ TEMPEST=false
2025-08-29 14:18:17.968179 | orchestrator | ++ export IS_ZUUL=true
2025-08-29 14:18:17.968189 | orchestrator | ++ IS_ZUUL=true
2025-08-29 14:18:17.968200 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.249
2025-08-29 14:18:17.968211 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.249
2025-08-29 14:18:17.968222 | orchestrator | ++ export EXTERNAL_API=false
2025-08-29 14:18:17.968233 | orchestrator | ++ EXTERNAL_API=false
2025-08-29 14:18:17.968243 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-08-29 14:18:17.968254 | orchestrator | ++ IMAGE_USER=ubuntu
2025-08-29 14:18:17.968265 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-08-29 14:18:17.968276 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-08-29 14:18:17.968287 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-08-29 14:18:17.968297 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-08-29 14:18:17.968309 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-08-29 14:18:18.022725 | orchestrator | + docker version
2025-08-29 14:18:18.288184 | orchestrator | Client: Docker Engine - Community
2025-08-29 14:18:18.288270 | orchestrator | Version: 27.5.1
2025-08-29 14:18:18.288285 | orchestrator | API version: 1.47
2025-08-29 14:18:18.288297 | orchestrator | Go version: go1.22.11
2025-08-29 14:18:18.288307 | orchestrator | Git commit: 9f9e405
2025-08-29 14:18:18.288318 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-08-29 14:18:18.288331 | orchestrator | OS/Arch: linux/amd64
2025-08-29 14:18:18.288342 | orchestrator | Context: default
2025-08-29 14:18:18.288352 | orchestrator |
2025-08-29 14:18:18.288363 | orchestrator | Server: Docker Engine - Community
2025-08-29 14:18:18.288374 | orchestrator | Engine:
2025-08-29 14:18:18.288386 | orchestrator | Version: 27.5.1
2025-08-29 14:18:18.288396 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-08-29 14:18:18.288438 | orchestrator | Go version: go1.22.11
2025-08-29 14:18:18.288450 | orchestrator | Git commit: 4c9b3b0
2025-08-29 14:18:18.288460 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-08-29 14:18:18.288471 | orchestrator | OS/Arch: linux/amd64
2025-08-29 14:18:18.288482 | orchestrator | Experimental: false
2025-08-29 14:18:18.288493 | orchestrator | containerd:
2025-08-29 14:18:18.288514 | orchestrator | Version: 1.7.27
2025-08-29 14:18:18.288526 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-08-29 14:18:18.288538 | orchestrator | runc:
2025-08-29 14:18:18.288548 | orchestrator | Version: 1.2.5
2025-08-29 14:18:18.288559 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-08-29 14:18:18.288570 | orchestrator | docker-init:
2025-08-29 14:18:18.288581 | orchestrator | Version: 0.19.0
2025-08-29 14:18:18.288592 | orchestrator | GitCommit: de40ad0
2025-08-29 14:18:18.291645 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-08-29 14:18:18.299139 | orchestrator | + set -e
2025-08-29 14:18:18.299198 | orchestrator | + source /opt/manager-vars.sh
2025-08-29 14:18:18.299212 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-08-29 14:18:18.299224 | orchestrator | ++ NUMBER_OF_NODES=6
2025-08-29 14:18:18.299234 | orchestrator | ++ export CEPH_VERSION=reef
2025-08-29 14:18:18.299245 | orchestrator | ++ CEPH_VERSION=reef
2025-08-29 14:18:18.299257 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-08-29 14:18:18.299268 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-08-29 14:18:18.299279 | orchestrator | ++ export MANAGER_VERSION=latest
2025-08-29 14:18:18.299290 | orchestrator | ++ MANAGER_VERSION=latest
2025-08-29 14:18:18.299301 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-08-29 14:18:18.299312 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-08-29 14:18:18.299323 | orchestrator | ++ export ARA=false
2025-08-29 14:18:18.299334 | orchestrator | ++ ARA=false
2025-08-29 14:18:18.299345 | orchestrator | ++ export DEPLOY_MODE=manager
2025-08-29 14:18:18.299356 | orchestrator | ++ DEPLOY_MODE=manager
2025-08-29 14:18:18.299366 | orchestrator | ++ export TEMPEST=false
2025-08-29 14:18:18.299377 | orchestrator | ++ TEMPEST=false
2025-08-29 14:18:18.299388 | orchestrator | ++ export IS_ZUUL=true
2025-08-29 14:18:18.299398 | orchestrator | ++ IS_ZUUL=true
2025-08-29 14:18:18.299409 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.249
2025-08-29 14:18:18.299420 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.249
2025-08-29 14:18:18.299431 | orchestrator | ++ export EXTERNAL_API=false
2025-08-29 14:18:18.299442 | orchestrator | ++ EXTERNAL_API=false
2025-08-29 14:18:18.299452 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-08-29 14:18:18.299463 | orchestrator | ++ IMAGE_USER=ubuntu
2025-08-29 14:18:18.299484 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-08-29 14:18:18.299496 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-08-29 14:18:18.299507 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-08-29 14:18:18.299517 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-08-29 14:18:18.299528 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-08-29 14:18:18.299538 | orchestrator | ++ export INTERACTIVE=false
2025-08-29 14:18:18.299549 | orchestrator | ++ INTERACTIVE=false
2025-08-29 14:18:18.299559 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-08-29 14:18:18.299575 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-08-29 14:18:18.299586 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-08-29 14:18:18.299596 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-08-29 14:18:18.299607 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-08-29 14:18:18.306342 | orchestrator | + set -e
2025-08-29 14:18:18.306390 | orchestrator | + VERSION=reef
2025-08-29 14:18:18.307910 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-08-29 14:18:18.313828 | orchestrator | + [[ -n ceph_version: reef ]]
2025-08-29 14:18:18.313871 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-08-29 14:18:18.320029 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-08-29 14:18:18.325737 | orchestrator | + set -e
2025-08-29 14:18:18.325766 | orchestrator | + VERSION=2024.2
2025-08-29 14:18:18.326578 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-08-29 14:18:18.330874 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-08-29 14:18:18.330901 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-08-29 14:18:18.336412 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-08-29 14:18:18.337242 | orchestrator | ++ semver latest 7.0.0
2025-08-29 14:18:18.397772 | orchestrator | + [[ -1 -ge 0 ]]
2025-08-29 14:18:18.397836 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-08-29 14:18:18.397850 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-08-29 14:18:18.397862 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-08-29 14:18:18.486186 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-08-29 14:18:18.487577 | orchestrator | + source /opt/venv/bin/activate
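The set-ceph-version.sh and set-openstack-version.sh steps traced above follow the same grep-then-sed pattern: check that the key exists in configuration.yml, then rewrite its value in place. A minimal sketch of that pattern; the temp file, its contents, and the `set_version` helper name are illustrative, not taken from the testbed repository:

```shell
#!/bin/sh
# Sketch of the grep-then-sed version pinning seen in the trace above.
set -e

cfg=$(mktemp)
printf 'ceph_version: quincy\nopenstack_version: 2024.1\n' > "$cfg"

set_version() {
    key=$1 version=$2 file=$3
    # the guard mirrors `[[ -n $(grep '^key:' file) ]]`:
    # only rewrite when the key is already present
    if grep -q "^${key}:" "$file"; then
        sed -i "s/${key}: .*/${key}: ${version}/g" "$file"
    fi
}

set_version ceph_version reef "$cfg"
set_version openstack_version 2024.2 "$cfg"
cat "$cfg"
```

Note that this pattern never adds a missing key; if the key is absent the file is left untouched, which is why the traced scripts guard with grep first.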
2025-08-29 14:18:18.488818 | orchestrator | ++ deactivate nondestructive
2025-08-29 14:18:18.488843 | orchestrator | ++ '[' -n '' ']'
2025-08-29 14:18:18.488856 | orchestrator | ++ '[' -n '' ']'
2025-08-29 14:18:18.488868 | orchestrator | ++ hash -r
2025-08-29 14:18:18.488880 | orchestrator | ++ '[' -n '' ']'
2025-08-29 14:18:18.488891 | orchestrator | ++ unset VIRTUAL_ENV
2025-08-29 14:18:18.488907 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-08-29 14:18:18.488942 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-08-29 14:18:18.489032 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-08-29 14:18:18.489048 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-08-29 14:18:18.489059 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-08-29 14:18:18.489070 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-08-29 14:18:18.489086 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-08-29 14:18:18.489102 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-08-29 14:18:18.489135 | orchestrator | ++ export PATH
2025-08-29 14:18:18.489341 | orchestrator | ++ '[' -n '' ']'
2025-08-29 14:18:18.489358 | orchestrator | ++ '[' -z '' ']'
2025-08-29 14:18:18.489369 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-08-29 14:18:18.489384 | orchestrator | ++ PS1='(venv) '
2025-08-29 14:18:18.489395 | orchestrator | ++ export PS1
2025-08-29 14:18:18.489406 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-08-29 14:18:18.489417 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-08-29 14:18:18.489431 | orchestrator | ++ hash -r
2025-08-29 14:18:18.489775 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-08-29 14:18:19.713577 | orchestrator |
2025-08-29 14:18:19.713659 | orchestrator |
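The trace above guards the ansible-playbook call with `[[ -e /opt/venv/bin/activate ]]` before sourcing the venv, and later deactivates it again. A minimal sketch of that guard pattern, assuming a throwaway venv path (`/tmp/demo-venv`) and a stand-in command in place of the real ansible-playbook invocation:

```shell
#!/bin/sh
# Sketch of the activate-if-present / run / deactivate pattern from the trace.
set -e

VENV=/tmp/demo-venv
python3 -m venv "$VENV"   # throwaway venv standing in for /opt/venv

# activate only when the venv actually exists, as the deploy script does
if [ -e "$VENV/bin/activate" ]; then
    . "$VENV/bin/activate"
fi

# the real script runs ansible-playbook here; print the active prefix instead
python -c 'import sys; print(sys.prefix)'

# deactivate is a shell function defined by activate; call it only if defined
type deactivate >/dev/null 2>&1 && deactivate
```

Because `activate` is sourced rather than executed, it rewrites PATH and PS1 in the current shell, which is exactly the `++` xtrace output visible above.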
PLAY [Copy custom facts] *******************************************************
2025-08-29 14:18:19.713674 | orchestrator |
2025-08-29 14:18:19.713684 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-08-29 14:18:20.266489 | orchestrator | ok: [testbed-manager]
2025-08-29 14:18:20.266576 | orchestrator |
2025-08-29 14:18:20.266590 | orchestrator | TASK [Copy fact files] *********************************************************
2025-08-29 14:18:21.234215 | orchestrator | changed: [testbed-manager]
2025-08-29 14:18:21.234383 | orchestrator |
2025-08-29 14:18:21.234391 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-08-29 14:18:21.234396 | orchestrator |
2025-08-29 14:18:21.234400 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 14:18:23.550513 | orchestrator | ok: [testbed-manager]
2025-08-29 14:18:23.550579 | orchestrator |
2025-08-29 14:18:23.550586 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-08-29 14:18:23.599155 | orchestrator | ok: [testbed-manager]
2025-08-29 14:18:23.599189 | orchestrator |
2025-08-29 14:18:23.599196 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-08-29 14:18:24.038728 | orchestrator | changed: [testbed-manager]
2025-08-29 14:18:24.038786 | orchestrator |
2025-08-29 14:18:24.038792 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-08-29 14:18:24.073117 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:18:24.073144 | orchestrator |
2025-08-29 14:18:24.073149 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-08-29 14:18:24.431624 | orchestrator | changed: [testbed-manager]
2025-08-29 14:18:24.431693 | orchestrator |
2025-08-29 14:18:24.431699 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-08-29 14:18:24.488610 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:18:24.488654 | orchestrator |
2025-08-29 14:18:24.488659 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-08-29 14:18:24.838391 | orchestrator | ok: [testbed-manager]
2025-08-29 14:18:24.838509 | orchestrator |
2025-08-29 14:18:24.838517 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-08-29 14:18:24.935589 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:18:24.935673 | orchestrator |
2025-08-29 14:18:24.935689 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-08-29 14:18:24.935702 | orchestrator |
2025-08-29 14:18:24.935716 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 14:18:26.657087 | orchestrator | ok: [testbed-manager]
2025-08-29 14:18:26.657190 | orchestrator |
2025-08-29 14:18:26.657203 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-08-29 14:18:26.752975 | orchestrator | included: osism.services.traefik for testbed-manager
2025-08-29 14:18:26.753058 | orchestrator |
2025-08-29 14:18:26.753073 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-08-29 14:18:26.807899 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-08-29 14:18:26.807981 | orchestrator |
2025-08-29 14:18:26.807996 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-08-29 14:18:27.883502 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-08-29 14:18:27.883600 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-08-29 14:18:27.883616 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-08-29 14:18:27.883635 | orchestrator |
2025-08-29 14:18:27.883655 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-08-29 14:18:29.673579 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-08-29 14:18:29.673687 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-08-29 14:18:29.673718 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-08-29 14:18:29.673732 | orchestrator |
2025-08-29 14:18:29.673746 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-08-29 14:18:30.330006 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 14:18:30.330157 | orchestrator | changed: [testbed-manager]
2025-08-29 14:18:30.330175 | orchestrator |
2025-08-29 14:18:30.330187 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-08-29 14:18:30.967963 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 14:18:30.968075 | orchestrator | changed: [testbed-manager]
2025-08-29 14:18:30.968092 | orchestrator |
2025-08-29 14:18:30.968130 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-08-29 14:18:31.030804 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:18:31.030896 | orchestrator |
2025-08-29 14:18:31.030911 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-08-29 14:18:31.389194 | orchestrator | ok: [testbed-manager]
2025-08-29 14:18:31.389288 | orchestrator |
2025-08-29 14:18:31.389305 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-08-29 14:18:31.461330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-08-29 14:18:31.461423 | orchestrator |
2025-08-29 14:18:31.461439 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-08-29 14:18:32.489989 | orchestrator | changed: [testbed-manager]
2025-08-29 14:18:32.490168 | orchestrator |
2025-08-29 14:18:32.490180 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-08-29 14:18:33.311964 | orchestrator | changed: [testbed-manager]
2025-08-29 14:18:33.312087 | orchestrator |
2025-08-29 14:18:33.312137 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-08-29 14:18:45.822195 | orchestrator | changed: [testbed-manager]
2025-08-29 14:18:45.822342 | orchestrator |
2025-08-29 14:18:45.822358 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-08-29 14:18:45.869687 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:18:45.869801 | orchestrator |
2025-08-29 14:18:45.869818 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-08-29 14:18:45.869831 | orchestrator |
2025-08-29 14:18:45.869843 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 14:18:48.525237 | orchestrator | ok: [testbed-manager]
2025-08-29 14:18:48.525364 | orchestrator |
2025-08-29 14:18:48.525414 | orchestrator | TASK [Apply manager role] ******************************************************
2025-08-29 14:18:48.633572 | orchestrator | included: osism.services.manager for testbed-manager
2025-08-29 14:18:48.633690 | orchestrator |
2025-08-29 14:18:48.633706 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-08-29 14:18:48.692236 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-08-29 14:18:48.692326 | orchestrator |
2025-08-29 14:18:48.692339 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-08-29 14:18:51.119963 | orchestrator | ok: [testbed-manager]
2025-08-29 14:18:51.120158 | orchestrator |
2025-08-29 14:18:51.120178 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-08-29 14:18:51.174802 | orchestrator | ok: [testbed-manager]
2025-08-29 14:18:51.174907 | orchestrator |
2025-08-29 14:18:51.174923 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-08-29 14:18:51.296991 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-08-29 14:18:51.297119 | orchestrator |
2025-08-29 14:18:51.297137 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-08-29 14:18:54.091427 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-08-29 14:18:54.091521 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-08-29 14:18:54.091534 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-08-29 14:18:54.091547 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-08-29 14:18:54.091558 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-08-29 14:18:54.091570 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-08-29 14:18:54.091581 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-08-29 14:18:54.091592 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-08-29 14:18:54.091603 | orchestrator |
2025-08-29 14:18:54.091615 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-08-29 14:18:54.699540 | orchestrator | changed: [testbed-manager]
2025-08-29 14:18:54.699630 | orchestrator |
2025-08-29 14:18:54.699645 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-08-29 14:18:55.329731 | orchestrator | changed: [testbed-manager]
2025-08-29 14:18:55.329811 | orchestrator |
2025-08-29 14:18:55.329821 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-08-29 14:18:55.399337 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-08-29 14:18:55.399418 | orchestrator |
2025-08-29 14:18:55.399430 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-08-29 14:18:56.592663 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-08-29 14:18:56.592756 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-08-29 14:18:56.592771 | orchestrator |
2025-08-29 14:18:56.592784 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-08-29 14:18:57.214660 | orchestrator | changed: [testbed-manager]
2025-08-29 14:18:57.214774 | orchestrator |
2025-08-29 14:18:57.214789 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-08-29 14:18:57.270928 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:18:57.270961 | orchestrator |
2025-08-29 14:18:57.270973 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-08-29 14:18:57.326804 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:18:57.326836 | orchestrator |
2025-08-29 14:18:57.326848 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-08-29 14:18:57.380669 |
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-08-29 14:18:57.380697 | orchestrator |
2025-08-29 14:18:57.380709 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-08-29 14:18:58.738292 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 14:18:58.738392 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 14:18:58.738434 | orchestrator | changed: [testbed-manager]
2025-08-29 14:18:58.738448 | orchestrator |
2025-08-29 14:18:58.738460 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-08-29 14:18:59.361398 | orchestrator | changed: [testbed-manager]
2025-08-29 14:18:59.361517 | orchestrator |
2025-08-29 14:18:59.361532 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-08-29 14:18:59.418253 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:18:59.418296 | orchestrator |
2025-08-29 14:18:59.418308 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-08-29 14:18:59.513178 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-08-29 14:18:59.513258 | orchestrator |
2025-08-29 14:18:59.513272 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-08-29 14:19:00.057444 | orchestrator | changed: [testbed-manager]
2025-08-29 14:19:00.057540 | orchestrator |
2025-08-29 14:19:00.057555 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-08-29 14:19:00.445392 | orchestrator | changed: [testbed-manager]
2025-08-29 14:19:00.445483 | orchestrator |
2025-08-29 14:19:00.445499 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-08-29 14:19:01.624973 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-08-29 14:19:01.625734 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-08-29 14:19:01.625767 | orchestrator |
2025-08-29 14:19:01.625781 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-08-29 14:19:02.223208 | orchestrator | changed: [testbed-manager]
2025-08-29 14:19:02.223298 | orchestrator |
2025-08-29 14:19:02.223314 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-08-29 14:19:02.604728 | orchestrator | ok: [testbed-manager]
2025-08-29 14:19:02.604815 | orchestrator |
2025-08-29 14:19:02.604831 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-08-29 14:19:02.955239 | orchestrator | changed: [testbed-manager]
2025-08-29 14:19:02.955365 | orchestrator |
2025-08-29 14:19:02.955382 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-08-29 14:19:02.992213 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:19:02.992280 | orchestrator |
2025-08-29 14:19:02.992293 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-08-29 14:19:03.053332 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-08-29 14:19:03.053421 | orchestrator |
2025-08-29 14:19:03.053434 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-08-29 14:19:03.099843 | orchestrator | ok: [testbed-manager]
2025-08-29 14:19:03.099951 | orchestrator |
2025-08-29 14:19:03.099968 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-08-29 14:19:05.097013 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-08-29 14:19:05.097135 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-08-29 14:19:05.097153 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-08-29 14:19:05.097165 | orchestrator |
2025-08-29 14:19:05.097177 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-08-29 14:19:05.797819 | orchestrator | changed: [testbed-manager]
2025-08-29 14:19:05.797918 | orchestrator |
2025-08-29 14:19:05.797935 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-08-29 14:19:06.477649 | orchestrator | changed: [testbed-manager]
2025-08-29 14:19:06.477781 | orchestrator |
2025-08-29 14:19:06.477797 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-08-29 14:19:07.168061 | orchestrator | changed: [testbed-manager]
2025-08-29 14:19:07.168982 | orchestrator |
2025-08-29 14:19:07.169015 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-08-29 14:19:07.241514 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-08-29 14:19:07.241596 | orchestrator |
2025-08-29 14:19:07.241611 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-08-29 14:19:07.279515 | orchestrator | ok: [testbed-manager]
2025-08-29 14:19:07.279567 | orchestrator |
2025-08-29 14:19:07.279582 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-08-29 14:19:07.968286 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-08-29 14:19:07.968391 | orchestrator |
2025-08-29 14:19:07.968400 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-08-29 14:19:08.052982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-08-29 14:19:08.053133 | orchestrator |
2025-08-29 14:19:08.053150 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-08-29 14:19:08.747774 | orchestrator | changed: [testbed-manager]
2025-08-29 14:19:08.747893 | orchestrator |
2025-08-29 14:19:08.747908 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-08-29 14:19:09.325070 | orchestrator | ok: [testbed-manager]
2025-08-29 14:19:09.325208 | orchestrator |
2025-08-29 14:19:09.325220 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-08-29 14:19:09.380042 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:19:09.380158 | orchestrator |
2025-08-29 14:19:09.380168 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-08-29 14:19:09.426635 | orchestrator | ok: [testbed-manager]
2025-08-29 14:19:09.426694 | orchestrator |
2025-08-29 14:19:09.426706 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-08-29 14:19:10.232953 | orchestrator | changed: [testbed-manager]
2025-08-29 14:19:10.233104 | orchestrator |
2025-08-29 14:19:10.233122 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-08-29 14:20:39.544707 | orchestrator | changed: [testbed-manager]
2025-08-29 14:20:39.544829 | orchestrator |
2025-08-29 14:20:39.544847 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-08-29 14:20:40.569731 | orchestrator | ok: [testbed-manager]
2025-08-29 14:20:40.569835 | orchestrator |
2025-08-29 14:20:40.569851 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-08-29 14:20:40.628293 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:20:40.628485 | orchestrator |
2025-08-29 14:20:40.628505 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-08-29 14:20:44.207600 | orchestrator | changed: [testbed-manager]
2025-08-29 14:20:44.207715 | orchestrator |
2025-08-29 14:20:44.207733 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-08-29 14:20:44.338377 | orchestrator | ok: [testbed-manager]
2025-08-29 14:20:44.338494 | orchestrator |
2025-08-29 14:20:44.338509 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-08-29 14:20:44.338522 | orchestrator |
2025-08-29 14:20:44.338533 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-08-29 14:20:44.393394 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:20:44.393490 | orchestrator |
2025-08-29 14:20:44.393504 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-08-29 14:21:44.454315 | orchestrator | Pausing for 60 seconds
2025-08-29 14:21:44.454427 | orchestrator | changed: [testbed-manager]
2025-08-29 14:21:44.454441 | orchestrator |
2025-08-29 14:21:44.454454 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-08-29 14:21:47.576928 | orchestrator | changed: [testbed-manager]
2025-08-29 14:21:47.577112 | orchestrator |
2025-08-29 14:21:47.577135 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-08-29 14:22:29.182342 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-08-29 14:22:29.182480 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-08-29 14:22:29.182497 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:29.182511 | orchestrator |
2025-08-29 14:22:29.182523 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-08-29 14:22:38.745752 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:38.745889 | orchestrator |
2025-08-29 14:22:38.745906 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-08-29 14:22:38.835733 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-08-29 14:22:38.835834 | orchestrator |
2025-08-29 14:22:38.835848 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-08-29 14:22:38.835861 | orchestrator |
2025-08-29 14:22:38.835872 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-08-29 14:22:38.887378 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:22:38.887473 | orchestrator |
2025-08-29 14:22:38.887489 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:22:38.887503 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2025-08-29 14:22:38.887514 | orchestrator |
2025-08-29 14:22:39.003124 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-08-29 14:22:39.003234 | orchestrator | + deactivate
2025-08-29 14:22:39.003250 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-08-29 14:22:39.003264 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-08-29 14:22:39.003275 | orchestrator | + export PATH
2025-08-29 14:22:39.003286 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-08-29 14:22:39.003320 | orchestrator | + '[' -n '' ']'
2025-08-29 14:22:39.003332 | orchestrator | + hash -r
2025-08-29 14:22:39.003343 | orchestrator | + '[' -n '' ']'
2025-08-29 14:22:39.003354 | orchestrator | + unset VIRTUAL_ENV
2025-08-29 14:22:39.003365 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-08-29 14:22:39.003376 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-08-29 14:22:39.003387 | orchestrator | + unset -f deactivate
2025-08-29 14:22:39.003398 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-08-29 14:22:39.011191 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-08-29 14:22:39.011218 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-08-29 14:22:39.011229 | orchestrator | + local max_attempts=60
2025-08-29 14:22:39.011240 | orchestrator | + local name=ceph-ansible
2025-08-29 14:22:39.011252 | orchestrator | + local attempt_num=1
2025-08-29 14:22:39.012373 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:22:39.046792 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-08-29 14:22:39.046874 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-08-29 14:22:39.046884 | orchestrator | + local max_attempts=60
2025-08-29 14:22:39.046892 | orchestrator | + local name=kolla-ansible
2025-08-29 14:22:39.046900 | orchestrator | + local attempt_num=1
2025-08-29 14:22:39.047731 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-08-29 14:22:39.081720 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-08-29 14:22:39.081768 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-08-29 14:22:39.081778 | orchestrator | + local max_attempts=60
2025-08-29 14:22:39.081787 | orchestrator | + local name=osism-ansible
2025-08-29 14:22:39.081797 | orchestrator | + local attempt_num=1
2025-08-29 14:22:39.082582 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-08-29 14:22:39.120156 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-08-29 14:22:39.120207 | orchestrator | + [[ true == \t\r\u\e ]]
2025-08-29 14:22:39.120219 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-08-29 14:22:39.776651 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-08-29 14:22:39.989159 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-08-29 14:22:39.989284 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-08-29 14:22:39.989299 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-08-29 14:22:39.989310 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-08-29 14:22:39.989347 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-08-29 14:22:39.989368 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-08-29 14:22:39.989378 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-08-29 14:22:39.989388 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy)
2025-08-29 14:22:39.989397 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
14:22:39.989407 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-08-29 14:22:39.989416 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-08-29 14:22:39.989426 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-08-29 14:22:39.989435 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-08-29 14:22:39.989445 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-08-29 14:22:39.989454 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-08-29 14:22:39.996373 | orchestrator | ++ semver latest 7.0.0 2025-08-29 14:22:40.035614 | orchestrator | + [[ -1 -ge 0 ]] 2025-08-29 14:22:40.035711 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-08-29 14:22:40.035726 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-08-29 14:22:40.040315 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-08-29 14:22:52.221274 | orchestrator | 2025-08-29 14:22:52 | INFO  | Task d6ffd81d-d818-4625-8abd-776adfc1f8b0 (resolvconf) was prepared for execution. 2025-08-29 14:22:52.221374 | orchestrator | 2025-08-29 14:22:52 | INFO  | It takes a moment until task d6ffd81d-d818-4625-8abd-776adfc1f8b0 (resolvconf) has been started and output is visible here. 
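The `set -x` trace above shows a `wait_for_container_healthy` helper polling `docker inspect -f '{{.State.Health.Status}}'` for each ansible container. A hedged reconstruction of that helper — the polling interval and error message are assumptions, only the variable names and the inspect command come from the trace:

```shell
# Sketch of the wait_for_container_healthy helper traced above.
# Polls Docker's health status until the container reports "healthy"
# or the attempt budget is exhausted; sleep interval is an assumption.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" = healthy ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the log this gate runs for `ceph-ansible`, `kolla-ansible`, and `osism-ansible` before any `osism apply` task is issued, so the runner never dispatches playbooks to a container that is still starting.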
2025-08-29 14:23:04.504202 | orchestrator | 2025-08-29 14:23:04.504320 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-08-29 14:23:04.504337 | orchestrator | 2025-08-29 14:23:04.504351 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:23:04.504362 | orchestrator | Friday 29 August 2025 14:22:55 +0000 (0:00:00.134) 0:00:00.134 ********* 2025-08-29 14:23:04.504373 | orchestrator | ok: [testbed-manager] 2025-08-29 14:23:04.504385 | orchestrator | 2025-08-29 14:23:04.504396 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-08-29 14:23:04.504412 | orchestrator | Friday 29 August 2025 14:22:59 +0000 (0:00:03.438) 0:00:03.573 ********* 2025-08-29 14:23:04.504423 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:23:04.504472 | orchestrator | 2025-08-29 14:23:04.504484 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-08-29 14:23:04.504495 | orchestrator | Friday 29 August 2025 14:22:59 +0000 (0:00:00.063) 0:00:03.637 ********* 2025-08-29 14:23:04.504506 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-08-29 14:23:04.504518 | orchestrator | 2025-08-29 14:23:04.504528 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-08-29 14:23:04.504539 | orchestrator | Friday 29 August 2025 14:22:59 +0000 (0:00:00.070) 0:00:03.707 ********* 2025-08-29 14:23:04.504550 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 14:23:04.504560 | orchestrator | 2025-08-29 14:23:04.504571 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-08-29 14:23:04.504582 | orchestrator | Friday 29 August 2025 14:22:59 +0000 (0:00:00.074) 0:00:03.782 ********* 2025-08-29 14:23:04.504593 | orchestrator | ok: [testbed-manager] 2025-08-29 14:23:04.504603 | orchestrator | 2025-08-29 14:23:04.504614 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-08-29 14:23:04.504624 | orchestrator | Friday 29 August 2025 14:23:00 +0000 (0:00:00.984) 0:00:04.767 ********* 2025-08-29 14:23:04.504635 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:23:04.504645 | orchestrator | 2025-08-29 14:23:04.504656 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-08-29 14:23:04.504666 | orchestrator | Friday 29 August 2025 14:23:00 +0000 (0:00:00.065) 0:00:04.832 ********* 2025-08-29 14:23:04.504677 | orchestrator | ok: [testbed-manager] 2025-08-29 14:23:04.504687 | orchestrator | 2025-08-29 14:23:04.504698 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-08-29 14:23:04.504708 | orchestrator | Friday 29 August 2025 14:23:00 +0000 (0:00:00.427) 0:00:05.259 ********* 2025-08-29 14:23:04.504719 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:23:04.504729 | orchestrator | 2025-08-29 14:23:04.504740 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-08-29 14:23:04.504752 | orchestrator | Friday 29 August 2025 14:23:00 +0000 (0:00:00.064) 0:00:05.324 ********* 2025-08-29 14:23:04.504763 | orchestrator | changed: [testbed-manager] 2025-08-29 14:23:04.504773 | orchestrator | 2025-08-29 14:23:04.504784 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-08-29 14:23:04.504795 | orchestrator | Friday 29 August 2025 14:23:01 +0000 (0:00:00.454) 0:00:05.779 ********* 2025-08-29 14:23:04.504805 | orchestrator | changed: 
[testbed-manager] 2025-08-29 14:23:04.504816 | orchestrator | 2025-08-29 14:23:04.504826 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-08-29 14:23:04.504837 | orchestrator | Friday 29 August 2025 14:23:02 +0000 (0:00:00.980) 0:00:06.759 ********* 2025-08-29 14:23:04.504847 | orchestrator | ok: [testbed-manager] 2025-08-29 14:23:04.504858 | orchestrator | 2025-08-29 14:23:04.504868 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-08-29 14:23:04.504879 | orchestrator | Friday 29 August 2025 14:23:03 +0000 (0:00:00.876) 0:00:07.635 ********* 2025-08-29 14:23:04.504890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-08-29 14:23:04.504901 | orchestrator | 2025-08-29 14:23:04.504919 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-08-29 14:23:04.504930 | orchestrator | Friday 29 August 2025 14:23:03 +0000 (0:00:00.079) 0:00:07.715 ********* 2025-08-29 14:23:04.504941 | orchestrator | changed: [testbed-manager] 2025-08-29 14:23:04.504951 | orchestrator | 2025-08-29 14:23:04.504962 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:23:04.504973 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 14:23:04.505012 | orchestrator | 2025-08-29 14:23:04.505024 | orchestrator | 2025-08-29 14:23:04.505034 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:23:04.505045 | orchestrator | Friday 29 August 2025 14:23:04 +0000 (0:00:01.019) 0:00:08.735 ********* 2025-08-29 14:23:04.505056 | orchestrator | =============================================================================== 2025-08-29 14:23:04.505067 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.44s 2025-08-29 14:23:04.505078 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.02s 2025-08-29 14:23:04.505088 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.98s 2025-08-29 14:23:04.505099 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.98s 2025-08-29 14:23:04.505110 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.88s 2025-08-29 14:23:04.505121 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.45s 2025-08-29 14:23:04.505149 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.43s 2025-08-29 14:23:04.505160 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-08-29 14:23:04.505171 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2025-08-29 14:23:04.505182 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2025-08-29 14:23:04.505192 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-08-29 14:23:04.505203 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2025-08-29 14:23:04.505214 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-08-29 14:23:04.698371 | orchestrator | + osism apply sshconfig 2025-08-29 14:23:16.522342 | orchestrator | 2025-08-29 14:23:16 | INFO  | Task 27857e98-07a1-4c7d-990b-c52bfcfdfb34 (sshconfig) was prepared for execution. 
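The sshconfig play that follows creates a per-host config fragment under `~/.ssh/config.d` and then assembles them into a single `~/.ssh/config` (tasks "Ensure config for each host exist" and "Assemble ssh config"). A minimal sketch of that layout — the stanza contents and the `dragon` user are assumptions, only the directory structure and host list come from the log:

```shell
# Sketch of the ssh config layout the osism.commons.sshconfig role produces:
# one fragment per inventory host, concatenated into ~/.ssh/config.
mkdir -p "$HOME/.ssh/config.d"
for host in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 \
            testbed-node-3 testbed-node-4 testbed-node-5; do
    # Stanza fields below are illustrative, not taken from the role.
    cat > "$HOME/.ssh/config.d/$host" <<EOF
Host $host
    User dragon
EOF
done
cat "$HOME/.ssh/config.d"/* > "$HOME/.ssh/config"
```

Assembling fragments this way lets later roles (known-hosts, kolla bootstraps) address every node by its inventory hostname without per-command SSH options.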
2025-08-29 14:23:16.522478 | orchestrator | 2025-08-29 14:23:16 | INFO  | It takes a moment until task 27857e98-07a1-4c7d-990b-c52bfcfdfb34 (sshconfig) has been started and output is visible here. 2025-08-29 14:23:28.376362 | orchestrator | 2025-08-29 14:23:28.376474 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-08-29 14:23:28.376490 | orchestrator | 2025-08-29 14:23:28.376502 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-08-29 14:23:28.376514 | orchestrator | Friday 29 August 2025 14:23:20 +0000 (0:00:00.161) 0:00:00.161 ********* 2025-08-29 14:23:28.376525 | orchestrator | ok: [testbed-manager] 2025-08-29 14:23:28.376538 | orchestrator | 2025-08-29 14:23:28.376549 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-08-29 14:23:28.376560 | orchestrator | Friday 29 August 2025 14:23:21 +0000 (0:00:00.551) 0:00:00.713 ********* 2025-08-29 14:23:28.376571 | orchestrator | changed: [testbed-manager] 2025-08-29 14:23:28.376583 | orchestrator | 2025-08-29 14:23:28.376594 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-08-29 14:23:28.376605 | orchestrator | Friday 29 August 2025 14:23:21 +0000 (0:00:00.503) 0:00:01.216 ********* 2025-08-29 14:23:28.376616 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-08-29 14:23:28.376627 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-08-29 14:23:28.376638 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-08-29 14:23:28.376651 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-08-29 14:23:28.376661 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-08-29 14:23:28.376672 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-08-29 14:23:28.376698 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-2) 2025-08-29 14:23:28.376710 | orchestrator | 2025-08-29 14:23:28.376743 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-08-29 14:23:28.376754 | orchestrator | Friday 29 August 2025 14:23:27 +0000 (0:00:05.871) 0:00:07.087 ********* 2025-08-29 14:23:28.376765 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:23:28.376776 | orchestrator | 2025-08-29 14:23:28.376787 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-08-29 14:23:28.376798 | orchestrator | Friday 29 August 2025 14:23:27 +0000 (0:00:00.069) 0:00:07.157 ********* 2025-08-29 14:23:28.376808 | orchestrator | changed: [testbed-manager] 2025-08-29 14:23:28.376819 | orchestrator | 2025-08-29 14:23:28.376830 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:23:28.376842 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:23:28.376854 | orchestrator | 2025-08-29 14:23:28.376865 | orchestrator | 2025-08-29 14:23:28.376876 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:23:28.376889 | orchestrator | Friday 29 August 2025 14:23:28 +0000 (0:00:00.602) 0:00:07.759 ********* 2025-08-29 14:23:28.376900 | orchestrator | =============================================================================== 2025-08-29 14:23:28.376912 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.87s 2025-08-29 14:23:28.376924 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.60s 2025-08-29 14:23:28.376936 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.55s 2025-08-29 14:23:28.376948 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.50s 2025-08-29 14:23:28.376960 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-08-29 14:23:28.645930 | orchestrator | + osism apply known-hosts 2025-08-29 14:23:40.640853 | orchestrator | 2025-08-29 14:23:40 | INFO  | Task 53a27687-fdd9-4b59-9f04-e5ba29131151 (known-hosts) was prepared for execution. 2025-08-29 14:23:40.641047 | orchestrator | 2025-08-29 14:23:40 | INFO  | It takes a moment until task 53a27687-fdd9-4b59-9f04-e5ba29131151 (known-hosts) has been started and output is visible here. 2025-08-29 14:23:57.914924 | orchestrator | 2025-08-29 14:23:57.915135 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-08-29 14:23:57.915152 | orchestrator | 2025-08-29 14:23:57.915165 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-08-29 14:23:57.915178 | orchestrator | Friday 29 August 2025 14:23:44 +0000 (0:00:00.171) 0:00:00.171 ********* 2025-08-29 14:23:57.915190 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-08-29 14:23:57.915201 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-08-29 14:23:57.915212 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-08-29 14:23:57.915224 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-08-29 14:23:57.915235 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-08-29 14:23:57.915245 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-08-29 14:23:57.915256 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-08-29 14:23:57.915267 | orchestrator | 2025-08-29 14:23:57.915277 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-08-29 14:23:57.915290 | orchestrator | Friday 29 August 2025 14:23:51 +0000 (0:00:07.017) 0:00:07.188 ********* 2025-08-29 
14:23:57.915302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-08-29 14:23:57.915315 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-08-29 14:23:57.915326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-08-29 14:23:57.915365 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-08-29 14:23:57.915376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-08-29 14:23:57.915400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-08-29 14:23:57.915413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-08-29 14:23:57.915424 | orchestrator | 2025-08-29 14:23:57.915436 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:23:57.915448 | orchestrator | Friday 29 August 2025 14:23:51 +0000 (0:00:00.163) 0:00:07.352 ********* 2025-08-29 14:23:57.915461 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHKf0LQlxecsARAchv812lKMeCEd1zpbn4l6V1PK998GhSxCMMx9EmPEK+aUmxi8HydNoiDe+fV2SaQ8pBDKJvo=) 2025-08-29 14:23:57.915479 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSX7t1RyU0H8QFTzz89cgbuhKHkTHWa/jL++r9t0OE9I+xb4WlVVW/H3wVWX45gUjaxXUlYaOT8YNKGVTQJiaYin081Jk+B9N+A3uaIkKmIUVLD95wFACm5TaSikf2wJfU9PM4EQHtjjWBa7GYNW0ns0h3X4zaJilzx7PpFe3aOv88aUWyZmMqGdX90seQ3V/kRmLHu5tfRDTNPyqd26qY/Kaxz5ShPotqzX0oeLswXbIDpsJnL09ioDuVBLyWdGK8F9UGXYNOmTq+z5R3qdQzTn+b9NqXwKVB9tXkondBopI3GPDmZ3WBsPZyVMuNIEkwDx102OrM1TwJGOiMQHKFNu+vaFLtb081E3WSBz6H/hZGUWxPnzxvlaTAVShfFwFmYNRQMxSBkhEZhbogp+MordIy2d1FPPz6/YTSrnMn+r+6oCgAAEVzqrR8Zg/se7jJEFBEqqTl2xbmqDHCMJP3pB2pPTGH4TRHQtQB1OjXv2E2zLgz9J0J02TX6RNaZIM=) 2025-08-29 14:23:57.915494 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBdnsBWr8fWWWbUmiPkuo+jRNAOQnl3iG29mQ47H5qoq) 2025-08-29 14:23:57.915508 | orchestrator | 2025-08-29 14:23:57.915520 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:23:57.915532 | orchestrator | Friday 29 August 2025 14:23:52 +0000 (0:00:01.115) 0:00:08.468 ********* 2025-08-29 14:23:57.915545 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBOzWLNlvnEwxk8OetlGG41rqzwUxs2Hnnnypgcai+KztnVbkFQ+6HJ5zYvvKwP+Jk/FsCsugHIaivv6bTrpPHM=) 2025-08-29 14:23:57.915589 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDbNwgova3rTwJQWodplc9DXQihe8mfWkRh68aRPXtQWoQsb4QY5glJY6yiVvAdaAkOePRuzxyMYSROQ21jEo3Wbv97aRTko1llqs7BLRXNKIbBn83dkPvPYBrmYqsYonQtDOgpaJEt8hlu5r4+tA+fbEFfPFvTl0atG/K9j+D9IBSlD/Nv2q5jbtJL8h7kgj8rttFvVJAI0rYfcYo9QW4TZLDghx13izAN9RUUOroG0DaT2EbSQljX6C5lpJ0w13r1Cp8CgIuJnv5xm41Yd8qHa0KNrCZ9mJfjCWzrLuuZTCsF3tSE/CO6Is8bOE4sd9LQ22DkedVCbG7PY3z13xOqXagRgm92ZqkSMd0NmHrGINiODwL/F0cDotp9g1/jW05A3bS2iF0qvulSh11FQ7YwDYK2tjyPvab67MXNc36nL4wsXdWBndIv8NZDVOm5N5rxfvSmoUjL+r7i9xshgQFzFoQi723MHfEwWzP6YYhj+sUW41ZMO+1pau8binQevz8=) 2025-08-29 14:23:57.915603 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAZeCBWtyag498na9WR1HOcEVwe3aEsduMpohedehvwS) 2025-08-29 14:23:57.915615 | orchestrator | 2025-08-29 14:23:57.915627 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:23:57.915639 | orchestrator | Friday 29 August 2025 14:23:53 +0000 (0:00:00.944) 0:00:09.413 ********* 2025-08-29 14:23:57.915651 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNWveBSv5vjUdBYPqhcBOB8IlCtfbAzzFsp7bSA09XFlHvGrjb+YCnpX2jE3isOB5Wm66Z8fsDvHvi6sAvMdLQc=) 2025-08-29 14:23:57.915674 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMFvJu9qEV70Kr4B4fgJdIKMof9b8qtpBSgivdINR8CZSOVUotGkkYXkoSHM8HpkC5B0bcuYGSvZ0fbXDxc9PZ7Jm4ti1LL/mDpcSg7z2wJJAl4hS2Ic+X9mDAYYHpnI56IHOrDYf9/ogUbb15tM2nudNVfsbBTMsLwCVlI9os1osldiIxt3DfK05KkNGTxmIKll2Pn37HzLBs7+IwoX1wGHfI62wngNP0kmyYPGlFj0/C+etC3vPlvZdMUye6nBp6tYOoOSidCZmdhAtCLSYnwAKJOpD15aHol9MIyma2TFPPMy+KCzYrgGAd+XftfFntBkqDeLwynlkK/tmB7JqRB2ORwsyJXM45lfikkK5g+Va3Iii0SMSZ42xgB8+WoyF+WozDV4g6KAfVjf9YAYlz323gzUzx83NR6BVmNwOS7N0ga6mqE04HnM0Re0eroaAHnI5x7QxRS9w2keuh2+XFm/HyHiu6o4EhR9ZvbnhfoJsIwGHTvUNuQ23R2cZ6nas=) 2025-08-29 14:23:57.915686 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASKBcGjCp8YQVtkhm7+d7ejS/OEP8CKhOltV/FE5vUP) 2025-08-29 14:23:57.915698 | orchestrator | 2025-08-29 14:23:57.915711 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:23:57.915723 | orchestrator | Friday 29 August 2025 14:23:54 +0000 (0:00:00.943) 0:00:10.356 ********* 2025-08-29 14:23:57.915802 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIwyz+5XvZBCUHwXTWAywv6lovnPsfu5BiXObWq+hvC55L6PTjchT3t6PZ3X0D0vFwzYvRQuIa7sNjny0/PUWCQ=) 2025-08-29 14:23:57.915814 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMpICPVwdx2zQf3Mcnqzp0YbeaT+OFJIfvuMS2G/O6o8) 2025-08-29 14:23:57.915825 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuBMczOFYGH1oYZOjcCNe7s5zyACqHlqgdiIwQC++nbEVpt2f3Ack0gp2nBS/Gm+zVR1NBJLTOmrTNx2FPqsFyuEOx3A8o1ZqclgrkndJdXTChY8NVNlnxJsQQNztoywn50WbwlybRXdjcTBjsggPUErNhn/dpVUlhToKioP0grabLOPMLl9Aup+3Tngo/iUgDy/xBlOlIkCLn0w8yoH0Y7yFk+KF6ePlAcAKd+4T1X3BgJK5FJQPtD7sv5oNsumEZliRlz9v+Vg7ZDGxO3s5n44P8LXyggJnyLvP5eIpEXvQg+7a99ytfutKPmTs7W5Ko54cv0YUD0xobKO+RoYLBxG7cWYK5L9BPilaAhi0mVz1ywHz/NN/MQMlMMbk7ic+qptLxfmzXKaphaxKUf8aS37SXxIEzk7zmzCkeYNDkGsj8/ztb2Ca+jnOcTIjBYq9SR0FR/6uByXxZTwuLyl1sxmroKrgPV/w81yrWK9zCJL7PrnmVDDv8PCnk+1ch8uU=) 2025-08-29 14:23:57.915836 | orchestrator | 2025-08-29 14:23:57.915846 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:23:57.915857 | orchestrator | Friday 29 August 2025 14:23:55 +0000 (0:00:01.078) 0:00:11.435 ********* 2025-08-29 14:23:57.915868 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCdrK3w8EoOeY99ARyBuJciOorwR4dED+jlW7JRzuBpnNmo2PGaBmTnnWOidt3hqp/pUDeqy3y1Ut5HDZk3WIPof82He5jN9tTlvNL6D3fohalNEPHkVBin6vJHuWYWz5N3SvZuGyEZR+CoWOeJpGzPy0jl/StT1jjxy19e78RxdDkmo7S6UJe+PoiBPp1PpHEmLSuJEkHW4aoUZPBd+Brg3qaCvCAuJQEuoI8tzOc2Md767VOgffsBXKIamG7dguPau9Sc4Cipap+xRJPsTAni9weRzMsU6pILHXF+olI2SvsFGcQEjkDcRG3NMLxi7PrQFkQcR/PiB43C08Lnvbt0ShLZyb9oFNMSLYAJBCb8STFtNiGE5cSToBMmP3KRetItNHzu9gd5CtRxQh5u5lU08Eapf5Ml6PGa6zzuUNHSjd8QvLdZYRyx2LPaRBH5DC/95thCqENoImr+kP6JsZRndjOyUTfskFZ8TX7fnbNvsPfW5QV+QRzMAwDPtciPIhk=) 2025-08-29 14:23:57.915879 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMbTlef8BFEjgAlOckAxO8OtoAth4BHBjVi/z3fArdNhZRkgrnxF3+IuctoehaoIDp5ccNcrxmOlXXYhqu5F/oQ=) 2025-08-29 14:23:57.915890 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILqJ+tbhDyLRLgHQ+LgjeS8o1SZq8/Herc/5ArNFRgXT) 2025-08-29 14:23:57.915900 | orchestrator | 2025-08-29 14:23:57.915911 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:23:57.915922 | orchestrator | Friday 29 August 2025 14:23:56 +0000 (0:00:01.022) 0:00:12.458 ********* 2025-08-29 14:23:57.915941 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJfvVbGpsWFQ0sFUGTYvYQsLPPE0F8GI6Ez13Kbs4qkY3YuxmFtW2gXAYchowKEaFC9SWQczEz48jcpTFTJlsDaL7DtgKaPbxzMuKdyJKhm/zh4S4ejw0LN/8oihIBvekl8NxgD+wvtckqx/Kf0WVYdTMgeg9rFtCQdBa7KETy641Ld9LBFGrD/m3V7djC7s9fHFDGK0qXMx3o0bk4g5wC0oM2Ao7qIn1MGtJ6DDvzHBYJYmDalwZKGCgMEjfFPats4MSTxoMPUgUnYftLqOic6Ngt/jE5nhJetRWlSSUyG+fuHFsL6MrIV2/lWtdnmgxtbN5pCagvHLWGmTVE0ie+dGmR+mHZg2CHzp7uw1i66ZCWkK9YpT1kU+34qqCEcmBKpTYVLp/5pUkpGrCcLLF2cRaAm4u/aftKBMlvcQsEtq46WMIW5sf1HbImwar6lnTht+hDVs7wUKst+Yo3R2MNxbA5TxamTqm61FiITzv2YlGYjYUT73N/kROYT/qrynk=) 2025-08-29 14:24:08.618749 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAzDK9NxWF3/EZGnlspr2gwywjUlZ2ye8scKR4Wem0WZjxlhe7nbj82nMJn+m2C66TBjd6IJHerkOh/+/brMs+8=) 2025-08-29 14:24:08.618919 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC6ogOG7GnJK5GUOinajinCPOMyl9nvoIdFN+3bQ3dHf) 2025-08-29 14:24:08.618938 | orchestrator | 2025-08-29 14:24:08.618992 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:24:08.619006 | orchestrator | Friday 29 August 2025 14:23:57 +0000 (0:00:01.070) 0:00:13.528 ********* 2025-08-29 14:24:08.619734 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC249Qz0zSNQnKDpcKfzP0FIR7PfcvKUmWzNzZ60ZqkOcqyQ6XKyW6bCwZ0QZVWU74kwdZ5cBqx4K7W5hkVYaPNxfHmb6DlMifBezpfKiWeiY2DJZ/8AHlfhVbPBYrXSuVLPM3dvEAgXJhi9UQN/GEbO4ZolQWreGmRKaDGZyAt9XVFyR0ZSILtjNi0AevJliWcvV6HdsMnjk5MQ3ahxC25aeJjE8XOKv3iaPonQsBGE0vkaQ4HlpAi5gIw98amRfEKTt66LyGDL8x6XREoqNjkeiG+O3C5f5Ar7O+8Qr9CHbhldDlWnAMp6Q+tvBh+jXziNmqLpovc050IwdesFoRu4KnWkrjcYrI1zypE+l7oRejtdIl/e5dX25RvDU6voeIjjzwOz+/yAZHwgenE1jTeZafgy3/uLUcM3n/RdR/yuXO1ZigHF60tshK5tEv2UUS9Duq/70/ITbRmkQzF6HpzrxoSFN+kSgQBJ1UIt1HAiAapywyGH73t6ax9KTZcDr8=) 2025-08-29 14:24:08.619759 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCl6gdLPyvjcJD0M12orN3199p+VsLpEQnC9Rvd6Mzmaj5MAULJChwnhsWEfsAtfDIxPd8iXT7EztU5K2oCEiGU=) 2025-08-29 14:24:08.619771 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfbxTJTU9WMJ8nWYPx38T4fKBGaQl6ihwy++vFyPEMR) 2025-08-29 14:24:08.619782 | orchestrator | 2025-08-29 14:24:08.619794 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-08-29 14:24:08.619806 | orchestrator | Friday 29 August 2025 14:23:58 +0000 (0:00:01.075) 
0:00:14.604 ********* 2025-08-29 14:24:08.619843 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-08-29 14:24:08.619855 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-08-29 14:24:08.619866 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-08-29 14:24:08.619877 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-08-29 14:24:08.619888 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-08-29 14:24:08.619899 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-08-29 14:24:08.619909 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-08-29 14:24:08.619920 | orchestrator | 2025-08-29 14:24:08.619931 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-08-29 14:24:08.619970 | orchestrator | Friday 29 August 2025 14:24:04 +0000 (0:00:05.264) 0:00:19.868 ********* 2025-08-29 14:24:08.619984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-08-29 14:24:08.619997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-08-29 14:24:08.620008 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-08-29 14:24:08.620019 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-08-29 14:24:08.620058 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-08-29 14:24:08.620069 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-08-29 14:24:08.620080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-08-29 14:24:08.620091 | orchestrator | 2025-08-29 14:24:08.620102 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:24:08.620113 | orchestrator | Friday 29 August 2025 14:24:04 +0000 (0:00:00.166) 0:00:20.035 ********* 2025-08-29 14:24:08.620124 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHKf0LQlxecsARAchv812lKMeCEd1zpbn4l6V1PK998GhSxCMMx9EmPEK+aUmxi8HydNoiDe+fV2SaQ8pBDKJvo=) 2025-08-29 14:24:08.620160 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSX7t1RyU0H8QFTzz89cgbuhKHkTHWa/jL++r9t0OE9I+xb4WlVVW/H3wVWX45gUjaxXUlYaOT8YNKGVTQJiaYin081Jk+B9N+A3uaIkKmIUVLD95wFACm5TaSikf2wJfU9PM4EQHtjjWBa7GYNW0ns0h3X4zaJilzx7PpFe3aOv88aUWyZmMqGdX90seQ3V/kRmLHu5tfRDTNPyqd26qY/Kaxz5ShPotqzX0oeLswXbIDpsJnL09ioDuVBLyWdGK8F9UGXYNOmTq+z5R3qdQzTn+b9NqXwKVB9tXkondBopI3GPDmZ3WBsPZyVMuNIEkwDx102OrM1TwJGOiMQHKFNu+vaFLtb081E3WSBz6H/hZGUWxPnzxvlaTAVShfFwFmYNRQMxSBkhEZhbogp+MordIy2d1FPPz6/YTSrnMn+r+6oCgAAEVzqrR8Zg/se7jJEFBEqqTl2xbmqDHCMJP3pB2pPTGH4TRHQtQB1OjXv2E2zLgz9J0J02TX6RNaZIM=) 2025-08-29 14:24:08.620173 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBdnsBWr8fWWWbUmiPkuo+jRNAOQnl3iG29mQ47H5qoq) 2025-08-29 
14:24:08.620184 | orchestrator | 2025-08-29 14:24:08.620194 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:24:08.620205 | orchestrator | Friday 29 August 2025 14:24:05 +0000 (0:00:01.047) 0:00:21.082 ********* 2025-08-29 14:24:08.620217 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDbNwgova3rTwJQWodplc9DXQihe8mfWkRh68aRPXtQWoQsb4QY5glJY6yiVvAdaAkOePRuzxyMYSROQ21jEo3Wbv97aRTko1llqs7BLRXNKIbBn83dkPvPYBrmYqsYonQtDOgpaJEt8hlu5r4+tA+fbEFfPFvTl0atG/K9j+D9IBSlD/Nv2q5jbtJL8h7kgj8rttFvVJAI0rYfcYo9QW4TZLDghx13izAN9RUUOroG0DaT2EbSQljX6C5lpJ0w13r1Cp8CgIuJnv5xm41Yd8qHa0KNrCZ9mJfjCWzrLuuZTCsF3tSE/CO6Is8bOE4sd9LQ22DkedVCbG7PY3z13xOqXagRgm92ZqkSMd0NmHrGINiODwL/F0cDotp9g1/jW05A3bS2iF0qvulSh11FQ7YwDYK2tjyPvab67MXNc36nL4wsXdWBndIv8NZDVOm5N5rxfvSmoUjL+r7i9xshgQFzFoQi723MHfEwWzP6YYhj+sUW41ZMO+1pau8binQevz8=) 2025-08-29 14:24:08.620228 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBOzWLNlvnEwxk8OetlGG41rqzwUxs2Hnnnypgcai+KztnVbkFQ+6HJ5zYvvKwP+Jk/FsCsugHIaivv6bTrpPHM=) 2025-08-29 14:24:08.620239 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAZeCBWtyag498na9WR1HOcEVwe3aEsduMpohedehvwS) 2025-08-29 14:24:08.620249 | orchestrator | 2025-08-29 14:24:08.620260 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:24:08.620271 | orchestrator | Friday 29 August 2025 14:24:06 +0000 (0:00:01.070) 0:00:22.153 ********* 2025-08-29 14:24:08.620282 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNWveBSv5vjUdBYPqhcBOB8IlCtfbAzzFsp7bSA09XFlHvGrjb+YCnpX2jE3isOB5Wm66Z8fsDvHvi6sAvMdLQc=) 2025-08-29 14:24:08.620300 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDMFvJu9qEV70Kr4B4fgJdIKMof9b8qtpBSgivdINR8CZSOVUotGkkYXkoSHM8HpkC5B0bcuYGSvZ0fbXDxc9PZ7Jm4ti1LL/mDpcSg7z2wJJAl4hS2Ic+X9mDAYYHpnI56IHOrDYf9/ogUbb15tM2nudNVfsbBTMsLwCVlI9os1osldiIxt3DfK05KkNGTxmIKll2Pn37HzLBs7+IwoX1wGHfI62wngNP0kmyYPGlFj0/C+etC3vPlvZdMUye6nBp6tYOoOSidCZmdhAtCLSYnwAKJOpD15aHol9MIyma2TFPPMy+KCzYrgGAd+XftfFntBkqDeLwynlkK/tmB7JqRB2ORwsyJXM45lfikkK5g+Va3Iii0SMSZ42xgB8+WoyF+WozDV4g6KAfVjf9YAYlz323gzUzx83NR6BVmNwOS7N0ga6mqE04HnM0Re0eroaAHnI5x7QxRS9w2keuh2+XFm/HyHiu6o4EhR9ZvbnhfoJsIwGHTvUNuQ23R2cZ6nas=) 2025-08-29 14:24:08.620320 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASKBcGjCp8YQVtkhm7+d7ejS/OEP8CKhOltV/FE5vUP) 2025-08-29 14:24:08.620331 | orchestrator | 2025-08-29 14:24:08.620342 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:24:08.620353 | orchestrator | Friday 29 August 2025 14:24:07 +0000 (0:00:01.053) 0:00:23.206 ********* 2025-08-29 14:24:08.620363 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMpICPVwdx2zQf3Mcnqzp0YbeaT+OFJIfvuMS2G/O6o8) 2025-08-29 14:24:08.620375 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuBMczOFYGH1oYZOjcCNe7s5zyACqHlqgdiIwQC++nbEVpt2f3Ack0gp2nBS/Gm+zVR1NBJLTOmrTNx2FPqsFyuEOx3A8o1ZqclgrkndJdXTChY8NVNlnxJsQQNztoywn50WbwlybRXdjcTBjsggPUErNhn/dpVUlhToKioP0grabLOPMLl9Aup+3Tngo/iUgDy/xBlOlIkCLn0w8yoH0Y7yFk+KF6ePlAcAKd+4T1X3BgJK5FJQPtD7sv5oNsumEZliRlz9v+Vg7ZDGxO3s5n44P8LXyggJnyLvP5eIpEXvQg+7a99ytfutKPmTs7W5Ko54cv0YUD0xobKO+RoYLBxG7cWYK5L9BPilaAhi0mVz1ywHz/NN/MQMlMMbk7ic+qptLxfmzXKaphaxKUf8aS37SXxIEzk7zmzCkeYNDkGsj8/ztb2Ca+jnOcTIjBYq9SR0FR/6uByXxZTwuLyl1sxmroKrgPV/w81yrWK9zCJL7PrnmVDDv8PCnk+1ch8uU=) 2025-08-29 14:24:08.620401 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIwyz+5XvZBCUHwXTWAywv6lovnPsfu5BiXObWq+hvC55L6PTjchT3t6PZ3X0D0vFwzYvRQuIa7sNjny0/PUWCQ=) 2025-08-29 14:24:12.802200 | orchestrator | 2025-08-29 14:24:12.802326 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:24:12.802342 | orchestrator | Friday 29 August 2025 14:24:08 +0000 (0:00:01.024) 0:00:24.230 ********* 2025-08-29 14:24:12.802374 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMbTlef8BFEjgAlOckAxO8OtoAth4BHBjVi/z3fArdNhZRkgrnxF3+IuctoehaoIDp5ccNcrxmOlXXYhqu5F/oQ=) 2025-08-29 14:24:12.802390 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdrK3w8EoOeY99ARyBuJciOorwR4dED+jlW7JRzuBpnNmo2PGaBmTnnWOidt3hqp/pUDeqy3y1Ut5HDZk3WIPof82He5jN9tTlvNL6D3fohalNEPHkVBin6vJHuWYWz5N3SvZuGyEZR+CoWOeJpGzPy0jl/StT1jjxy19e78RxdDkmo7S6UJe+PoiBPp1PpHEmLSuJEkHW4aoUZPBd+Brg3qaCvCAuJQEuoI8tzOc2Md767VOgffsBXKIamG7dguPau9Sc4Cipap+xRJPsTAni9weRzMsU6pILHXF+olI2SvsFGcQEjkDcRG3NMLxi7PrQFkQcR/PiB43C08Lnvbt0ShLZyb9oFNMSLYAJBCb8STFtNiGE5cSToBMmP3KRetItNHzu9gd5CtRxQh5u5lU08Eapf5Ml6PGa6zzuUNHSjd8QvLdZYRyx2LPaRBH5DC/95thCqENoImr+kP6JsZRndjOyUTfskFZ8TX7fnbNvsPfW5QV+QRzMAwDPtciPIhk=) 2025-08-29 14:24:12.802404 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILqJ+tbhDyLRLgHQ+LgjeS8o1SZq8/Herc/5ArNFRgXT) 2025-08-29 14:24:12.802416 | orchestrator | 2025-08-29 14:24:12.802426 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:24:12.802435 | orchestrator | Friday 29 August 2025 14:24:09 +0000 (0:00:01.044) 0:00:25.275 ********* 2025-08-29 14:24:12.802445 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAzDK9NxWF3/EZGnlspr2gwywjUlZ2ye8scKR4Wem0WZjxlhe7nbj82nMJn+m2C66TBjd6IJHerkOh/+/brMs+8=) 2025-08-29 14:24:12.802455 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJfvVbGpsWFQ0sFUGTYvYQsLPPE0F8GI6Ez13Kbs4qkY3YuxmFtW2gXAYchowKEaFC9SWQczEz48jcpTFTJlsDaL7DtgKaPbxzMuKdyJKhm/zh4S4ejw0LN/8oihIBvekl8NxgD+wvtckqx/Kf0WVYdTMgeg9rFtCQdBa7KETy641Ld9LBFGrD/m3V7djC7s9fHFDGK0qXMx3o0bk4g5wC0oM2Ao7qIn1MGtJ6DDvzHBYJYmDalwZKGCgMEjfFPats4MSTxoMPUgUnYftLqOic6Ngt/jE5nhJetRWlSSUyG+fuHFsL6MrIV2/lWtdnmgxtbN5pCagvHLWGmTVE0ie+dGmR+mHZg2CHzp7uw1i66ZCWkK9YpT1kU+34qqCEcmBKpTYVLp/5pUkpGrCcLLF2cRaAm4u/aftKBMlvcQsEtq46WMIW5sf1HbImwar6lnTht+hDVs7wUKst+Yo3R2MNxbA5TxamTqm61FiITzv2YlGYjYUT73N/kROYT/qrynk=) 2025-08-29 14:24:12.802490 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC6ogOG7GnJK5GUOinajinCPOMyl9nvoIdFN+3bQ3dHf) 2025-08-29 14:24:12.802501 | orchestrator | 2025-08-29 14:24:12.802510 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:24:12.802520 | orchestrator | Friday 29 August 2025 14:24:10 +0000 (0:00:01.053) 0:00:26.329 ********* 2025-08-29 14:24:12.802529 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCl6gdLPyvjcJD0M12orN3199p+VsLpEQnC9Rvd6Mzmaj5MAULJChwnhsWEfsAtfDIxPd8iXT7EztU5K2oCEiGU=) 2025-08-29 14:24:12.802539 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfbxTJTU9WMJ8nWYPx38T4fKBGaQl6ihwy++vFyPEMR) 2025-08-29 14:24:12.802549 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC249Qz0zSNQnKDpcKfzP0FIR7PfcvKUmWzNzZ60ZqkOcqyQ6XKyW6bCwZ0QZVWU74kwdZ5cBqx4K7W5hkVYaPNxfHmb6DlMifBezpfKiWeiY2DJZ/8AHlfhVbPBYrXSuVLPM3dvEAgXJhi9UQN/GEbO4ZolQWreGmRKaDGZyAt9XVFyR0ZSILtjNi0AevJliWcvV6HdsMnjk5MQ3ahxC25aeJjE8XOKv3iaPonQsBGE0vkaQ4HlpAi5gIw98amRfEKTt66LyGDL8x6XREoqNjkeiG+O3C5f5Ar7O+8Qr9CHbhldDlWnAMp6Q+tvBh+jXziNmqLpovc050IwdesFoRu4KnWkrjcYrI1zypE+l7oRejtdIl/e5dX25RvDU6voeIjjzwOz+/yAZHwgenE1jTeZafgy3/uLUcM3n/RdR/yuXO1ZigHF60tshK5tEv2UUS9Duq/70/ITbRmkQzF6HpzrxoSFN+kSgQBJ1UIt1HAiAapywyGH73t6ax9KTZcDr8=) 2025-08-29 14:24:12.802559 | orchestrator | 2025-08-29 14:24:12.802569 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-08-29 14:24:12.802578 | orchestrator | Friday 29 August 2025 14:24:11 +0000 (0:00:01.105) 0:00:27.435 ********* 2025-08-29 14:24:12.802589 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-08-29 14:24:12.802599 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-08-29 14:24:12.802608 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-08-29 14:24:12.802618 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-08-29 14:24:12.802627 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-08-29 14:24:12.802637 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-08-29 14:24:12.802646 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-08-29 14:24:12.802656 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:24:12.802666 | orchestrator | 2025-08-29 14:24:12.802698 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-08-29 14:24:12.802709 | orchestrator | Friday 29 August 2025 14:24:11 +0000 (0:00:00.160) 0:00:27.595 ********* 2025-08-29 14:24:12.802720 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:24:12.802731 | orchestrator | 
2025-08-29 14:24:12.802749 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-08-29 14:24:12.802760 | orchestrator | Friday 29 August 2025 14:24:12 +0000 (0:00:00.042) 0:00:27.637 ********* 2025-08-29 14:24:12.802771 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:24:12.802782 | orchestrator | 2025-08-29 14:24:12.802792 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-08-29 14:24:12.802803 | orchestrator | Friday 29 August 2025 14:24:12 +0000 (0:00:00.052) 0:00:27.690 ********* 2025-08-29 14:24:12.802814 | orchestrator | changed: [testbed-manager] 2025-08-29 14:24:12.802824 | orchestrator | 2025-08-29 14:24:12.802835 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:24:12.802846 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 14:24:12.802858 | orchestrator | 2025-08-29 14:24:12.802868 | orchestrator | 2025-08-29 14:24:12.802879 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:24:12.802897 | orchestrator | Friday 29 August 2025 14:24:12 +0000 (0:00:00.477) 0:00:28.168 ********* 2025-08-29 14:24:12.802907 | orchestrator | =============================================================================== 2025-08-29 14:24:12.802918 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 7.02s 2025-08-29 14:24:12.802929 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.26s 2025-08-29 14:24:12.802961 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-08-29 14:24:12.802973 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-08-29 14:24:12.802983 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-08-29 14:24:12.802994 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-08-29 14:24:12.803005 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-08-29 14:24:12.803016 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-08-29 14:24:12.803026 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-08-29 14:24:12.803036 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-08-29 14:24:12.803045 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-08-29 14:24:12.803054 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-08-29 14:24:12.803064 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-08-29 14:24:12.803073 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-08-29 14:24:12.803082 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2025-08-29 14:24:12.803092 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2025-08-29 14:24:12.803101 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.48s 2025-08-29 14:24:12.803110 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-08-29 14:24:12.803120 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-08-29 14:24:12.803130 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-08-29 
14:24:13.103062 | orchestrator | + osism apply squid 2025-08-29 14:24:25.014656 | orchestrator | 2025-08-29 14:24:25 | INFO  | Task 7026809a-c832-4f1b-be5e-266e61fc30cb (squid) was prepared for execution. 2025-08-29 14:24:25.014787 | orchestrator | 2025-08-29 14:24:25 | INFO  | It takes a moment until task 7026809a-c832-4f1b-be5e-266e61fc30cb (squid) has been started and output is visible here. 2025-08-29 14:26:18.046628 | orchestrator | 2025-08-29 14:26:18.046787 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-08-29 14:26:18.046804 | orchestrator | 2025-08-29 14:26:18.046815 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-08-29 14:26:18.046826 | orchestrator | Friday 29 August 2025 14:24:28 +0000 (0:00:00.162) 0:00:00.162 ********* 2025-08-29 14:26:18.046837 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 14:26:18.046848 | orchestrator | 2025-08-29 14:26:18.046910 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-08-29 14:26:18.046922 | orchestrator | Friday 29 August 2025 14:24:29 +0000 (0:00:00.098) 0:00:00.260 ********* 2025-08-29 14:26:18.046933 | orchestrator | ok: [testbed-manager] 2025-08-29 14:26:18.046946 | orchestrator | 2025-08-29 14:26:18.046956 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-08-29 14:26:18.046967 | orchestrator | Friday 29 August 2025 14:24:30 +0000 (0:00:01.521) 0:00:01.782 ********* 2025-08-29 14:26:18.046978 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-08-29 14:26:18.047011 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-08-29 14:26:18.047022 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-08-29 
14:26:18.047032 | orchestrator | 2025-08-29 14:26:18.047042 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-08-29 14:26:18.047051 | orchestrator | Friday 29 August 2025 14:24:31 +0000 (0:00:01.143) 0:00:02.926 ********* 2025-08-29 14:26:18.047061 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-08-29 14:26:18.047071 | orchestrator | 2025-08-29 14:26:18.047081 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-08-29 14:26:18.047090 | orchestrator | Friday 29 August 2025 14:24:32 +0000 (0:00:01.075) 0:00:04.001 ********* 2025-08-29 14:26:18.047099 | orchestrator | ok: [testbed-manager] 2025-08-29 14:26:18.047109 | orchestrator | 2025-08-29 14:26:18.047119 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-08-29 14:26:18.047130 | orchestrator | Friday 29 August 2025 14:24:33 +0000 (0:00:00.347) 0:00:04.349 ********* 2025-08-29 14:26:18.047141 | orchestrator | changed: [testbed-manager] 2025-08-29 14:26:18.047152 | orchestrator | 2025-08-29 14:26:18.047163 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-08-29 14:26:18.047174 | orchestrator | Friday 29 August 2025 14:24:34 +0000 (0:00:00.917) 0:00:05.266 ********* 2025-08-29 14:26:18.047184 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-08-29 14:26:18.047196 | orchestrator | ok: [testbed-manager] 2025-08-29 14:26:18.047207 | orchestrator | 2025-08-29 14:26:18.047217 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-08-29 14:26:18.047229 | orchestrator | Friday 29 August 2025 14:25:05 +0000 (0:00:31.072) 0:00:36.339 ********* 2025-08-29 14:26:18.047240 | orchestrator | changed: [testbed-manager] 2025-08-29 14:26:18.047250 | orchestrator | 2025-08-29 14:26:18.047261 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-08-29 14:26:18.047273 | orchestrator | Friday 29 August 2025 14:25:16 +0000 (0:00:11.850) 0:00:48.189 ********* 2025-08-29 14:26:18.047284 | orchestrator | Pausing for 60 seconds 2025-08-29 14:26:18.047295 | orchestrator | changed: [testbed-manager] 2025-08-29 14:26:18.047305 | orchestrator | 2025-08-29 14:26:18.047316 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-08-29 14:26:18.047329 | orchestrator | Friday 29 August 2025 14:26:17 +0000 (0:01:00.071) 0:01:48.261 ********* 2025-08-29 14:26:18.047340 | orchestrator | ok: [testbed-manager] 2025-08-29 14:26:18.047351 | orchestrator | 2025-08-29 14:26:18.047362 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-08-29 14:26:18.047373 | orchestrator | Friday 29 August 2025 14:26:17 +0000 (0:00:00.071) 0:01:48.332 ********* 2025-08-29 14:26:18.047384 | orchestrator | changed: [testbed-manager] 2025-08-29 14:26:18.047395 | orchestrator | 2025-08-29 14:26:18.047406 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:26:18.047416 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:26:18.047427 | orchestrator | 2025-08-29 14:26:18.047438 | orchestrator | 2025-08-29 14:26:18.047449 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-08-29 14:26:18.047460 | orchestrator | Friday 29 August 2025 14:26:17 +0000 (0:00:00.660) 0:01:48.992 ********* 2025-08-29 14:26:18.047471 | orchestrator | =============================================================================== 2025-08-29 14:26:18.047481 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-08-29 14:26:18.047490 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.07s 2025-08-29 14:26:18.047499 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.85s 2025-08-29 14:26:18.047509 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.52s 2025-08-29 14:26:18.047538 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.14s 2025-08-29 14:26:18.047548 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.08s 2025-08-29 14:26:18.047558 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.92s 2025-08-29 14:26:18.047568 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s 2025-08-29 14:26:18.047577 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2025-08-29 14:26:18.047586 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-08-29 14:26:18.047596 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-08-29 14:26:18.367589 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-08-29 14:26:18.368182 | orchestrator | ++ semver latest 9.0.0 2025-08-29 14:26:18.425446 | orchestrator | + [[ -1 -lt 0 ]] 2025-08-29 14:26:18.425541 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-08-29 14:26:18.425673 | 
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-08-29 14:26:30.422946 | orchestrator | 2025-08-29 14:26:30 | INFO  | Task d33ed2f3-8a6c-4c7a-8022-1170166f5aff (operator) was prepared for execution. 2025-08-29 14:26:30.423072 | orchestrator | 2025-08-29 14:26:30 | INFO  | It takes a moment until task d33ed2f3-8a6c-4c7a-8022-1170166f5aff (operator) has been started and output is visible here. 2025-08-29 14:26:45.987162 | orchestrator | 2025-08-29 14:26:45.987287 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-08-29 14:26:45.987305 | orchestrator | 2025-08-29 14:26:45.987317 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:26:45.987328 | orchestrator | Friday 29 August 2025 14:26:34 +0000 (0:00:00.154) 0:00:00.154 ********* 2025-08-29 14:26:45.987339 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:26:45.987352 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:26:45.987363 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:26:45.987373 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:26:45.987384 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:26:45.987394 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:26:45.987405 | orchestrator | 2025-08-29 14:26:45.987416 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-08-29 14:26:45.987427 | orchestrator | Friday 29 August 2025 14:26:37 +0000 (0:00:03.455) 0:00:03.609 ********* 2025-08-29 14:26:45.987438 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:26:45.987448 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:26:45.987459 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:26:45.987469 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:26:45.987479 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:26:45.987507 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:26:45.987519 | orchestrator | 2025-08-29 
14:26:45.987530 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-08-29 14:26:45.987541 | orchestrator | 2025-08-29 14:26:45.987552 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-08-29 14:26:45.987563 | orchestrator | Friday 29 August 2025 14:26:38 +0000 (0:00:00.759) 0:00:04.369 ********* 2025-08-29 14:26:45.987573 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:26:45.987584 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:26:45.987594 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:26:45.987605 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:26:45.987615 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:26:45.987626 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:26:45.987636 | orchestrator | 2025-08-29 14:26:45.987647 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-08-29 14:26:45.987658 | orchestrator | Friday 29 August 2025 14:26:38 +0000 (0:00:00.158) 0:00:04.527 ********* 2025-08-29 14:26:45.987668 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:26:45.987679 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:26:45.987689 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:26:45.987700 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:26:45.987710 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:26:45.987744 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:26:45.987756 | orchestrator | 2025-08-29 14:26:45.987766 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-08-29 14:26:45.987777 | orchestrator | Friday 29 August 2025 14:26:38 +0000 (0:00:00.153) 0:00:04.680 ********* 2025-08-29 14:26:45.987788 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:26:45.987799 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:26:45.987810 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:26:45.987820 | 
orchestrator | changed: [testbed-node-4] 2025-08-29 14:26:45.987830 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:26:45.987841 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:26:45.987888 | orchestrator | 2025-08-29 14:26:45.987899 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-08-29 14:26:45.987910 | orchestrator | Friday 29 August 2025 14:26:39 +0000 (0:00:00.603) 0:00:05.284 ********* 2025-08-29 14:26:45.987922 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:26:45.987932 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:26:45.987943 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:26:45.987954 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:26:45.987964 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:26:45.987974 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:26:45.987985 | orchestrator | 2025-08-29 14:26:45.987995 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-08-29 14:26:45.988006 | orchestrator | Friday 29 August 2025 14:26:40 +0000 (0:00:00.867) 0:00:06.152 ********* 2025-08-29 14:26:45.988017 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-08-29 14:26:45.988028 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-08-29 14:26:45.988038 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-08-29 14:26:45.988049 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-08-29 14:26:45.988059 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-08-29 14:26:45.988070 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-08-29 14:26:45.988080 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-08-29 14:26:45.988091 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-08-29 14:26:45.988101 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-08-29 14:26:45.988112 | orchestrator | changed: 
[testbed-node-4] => (item=sudo) 2025-08-29 14:26:45.988122 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-08-29 14:26:45.988132 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-08-29 14:26:45.988143 | orchestrator | 2025-08-29 14:26:45.988154 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-08-29 14:26:45.988165 | orchestrator | Friday 29 August 2025 14:26:41 +0000 (0:00:01.145) 0:00:07.298 ********* 2025-08-29 14:26:45.988175 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:26:45.988186 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:26:45.988196 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:26:45.988207 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:26:45.988217 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:26:45.988228 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:26:45.988238 | orchestrator | 2025-08-29 14:26:45.988249 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-08-29 14:26:45.988260 | orchestrator | Friday 29 August 2025 14:26:42 +0000 (0:00:01.285) 0:00:08.583 ********* 2025-08-29 14:26:45.988271 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-08-29 14:26:45.988281 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-08-29 14:26:45.988292 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-08-29 14:26:45.988303 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:26:45.988331 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:26:45.988343 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:26:45.988362 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:26:45.988373 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:26:45.988384 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:26:45.988394 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-08-29 14:26:45.988405 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-08-29 14:26:45.988415 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-08-29 14:26:45.988426 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-08-29 14:26:45.988436 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-08-29 14:26:45.988446 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-08-29 14:26:45.988457 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:26:45.988468 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:26:45.988478 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:26:45.988489 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:26:45.988499 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:26:45.988510 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:26:45.988520 | 
orchestrator | 2025-08-29 14:26:45.988531 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-08-29 14:26:45.988542 | orchestrator | Friday 29 August 2025 14:26:43 +0000 (0:00:01.214) 0:00:09.798 ********* 2025-08-29 14:26:45.988553 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:26:45.988564 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:26:45.988574 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:26:45.988584 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:26:45.988595 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:26:45.988605 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:26:45.988615 | orchestrator | 2025-08-29 14:26:45.988626 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-08-29 14:26:45.988636 | orchestrator | Friday 29 August 2025 14:26:44 +0000 (0:00:00.149) 0:00:09.948 ********* 2025-08-29 14:26:45.988647 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:26:45.988657 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:26:45.988667 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:26:45.988678 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:26:45.988688 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:26:45.988699 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:26:45.988709 | orchestrator | 2025-08-29 14:26:45.988720 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-08-29 14:26:45.988730 | orchestrator | Friday 29 August 2025 14:26:44 +0000 (0:00:00.566) 0:00:10.514 ********* 2025-08-29 14:26:45.988749 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:26:45.988761 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:26:45.988771 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:26:45.988782 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
14:26:45.988792 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:26:45.988803 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:26:45.988813 | orchestrator | 2025-08-29 14:26:45.988824 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-08-29 14:26:45.988835 | orchestrator | Friday 29 August 2025 14:26:44 +0000 (0:00:00.176) 0:00:10.691 ********* 2025-08-29 14:26:45.988845 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 14:26:45.988873 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-08-29 14:26:45.988883 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:26:45.988894 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-08-29 14:26:45.988905 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:26:45.988927 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:26:45.988938 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 14:26:45.988949 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:26:45.988959 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 14:26:45.988970 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 14:26:45.988980 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:26:45.988991 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:26:45.989001 | orchestrator | 2025-08-29 14:26:45.989012 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-08-29 14:26:45.989022 | orchestrator | Friday 29 August 2025 14:26:45 +0000 (0:00:00.674) 0:00:11.366 ********* 2025-08-29 14:26:45.989033 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:26:45.989044 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:26:45.989054 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:26:45.989064 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:26:45.989075 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
14:26:45.989085 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:26:45.989096 | orchestrator | 2025-08-29 14:26:45.989106 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-08-29 14:26:45.989117 | orchestrator | Friday 29 August 2025 14:26:45 +0000 (0:00:00.136) 0:00:11.502 ********* 2025-08-29 14:26:45.989127 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:26:45.989138 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:26:45.989148 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:26:45.989159 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:26:45.989169 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:26:45.989180 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:26:45.989190 | orchestrator | 2025-08-29 14:26:45.989201 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-08-29 14:26:45.989211 | orchestrator | Friday 29 August 2025 14:26:45 +0000 (0:00:00.143) 0:00:11.646 ********* 2025-08-29 14:26:45.989222 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:26:45.989232 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:26:45.989243 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:26:45.989253 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:26:45.989271 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:26:47.198926 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:26:47.199005 | orchestrator | 2025-08-29 14:26:47.199013 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-08-29 14:26:47.199019 | orchestrator | Friday 29 August 2025 14:26:45 +0000 (0:00:00.149) 0:00:11.795 ********* 2025-08-29 14:26:47.199024 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:26:47.199029 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:26:47.199034 | orchestrator | changed: [testbed-node-3] 2025-08-29 
14:26:47.199039 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:26:47.199044 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:26:47.199048 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:26:47.199052 | orchestrator | 2025-08-29 14:26:47.199057 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-08-29 14:26:47.199062 | orchestrator | Friday 29 August 2025 14:26:46 +0000 (0:00:00.667) 0:00:12.463 ********* 2025-08-29 14:26:47.199066 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:26:47.199071 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:26:47.199076 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:26:47.199080 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:26:47.199098 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:26:47.199103 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:26:47.199107 | orchestrator | 2025-08-29 14:26:47.199112 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:26:47.199117 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:26:47.199137 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:26:47.199142 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:26:47.199146 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:26:47.199150 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:26:47.199155 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:26:47.199159 | orchestrator | 2025-08-29 14:26:47.199164 | orchestrator | 2025-08-29 14:26:47.199168 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:26:47.199173 | orchestrator | Friday 29 August 2025 14:26:46 +0000 (0:00:00.266) 0:00:12.729 ********* 2025-08-29 14:26:47.199178 | orchestrator | =============================================================================== 2025-08-29 14:26:47.199182 | orchestrator | Gathering Facts --------------------------------------------------------- 3.46s 2025-08-29 14:26:47.199187 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.29s 2025-08-29 14:26:47.199191 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.21s 2025-08-29 14:26:47.199196 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.15s 2025-08-29 14:26:47.199201 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.87s 2025-08-29 14:26:47.199205 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s 2025-08-29 14:26:47.199210 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.67s 2025-08-29 14:26:47.199214 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s 2025-08-29 14:26:47.199218 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s 2025-08-29 14:26:47.199223 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s 2025-08-29 14:26:47.199227 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.27s 2025-08-29 14:26:47.199232 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s 2025-08-29 14:26:47.199236 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s 2025-08-29 14:26:47.199241 | orchestrator 
| osism.commons.operator : Set operator_groups variable to default value --- 0.15s 2025-08-29 14:26:47.199246 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2025-08-29 14:26:47.199250 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s 2025-08-29 14:26:47.199255 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s 2025-08-29 14:26:47.199259 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2025-08-29 14:26:47.569097 | orchestrator | + osism apply --environment custom facts 2025-08-29 14:26:49.437576 | orchestrator | 2025-08-29 14:26:49 | INFO  | Trying to run play facts in environment custom 2025-08-29 14:26:59.612181 | orchestrator | 2025-08-29 14:26:59 | INFO  | Task 0125709d-019b-4d43-b32f-1335d5820110 (facts) was prepared for execution. 2025-08-29 14:26:59.612312 | orchestrator | 2025-08-29 14:26:59 | INFO  | It takes a moment until task 0125709d-019b-4d43-b32f-1335d5820110 (facts) has been started and output is visible here. 
2025-08-29 14:27:43.568893 | orchestrator | 2025-08-29 14:27:43.569003 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-08-29 14:27:43.569014 | orchestrator | 2025-08-29 14:27:43.569043 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-08-29 14:27:43.569052 | orchestrator | Friday 29 August 2025 14:27:03 +0000 (0:00:00.086) 0:00:00.086 ********* 2025-08-29 14:27:43.569060 | orchestrator | ok: [testbed-manager] 2025-08-29 14:27:43.569070 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:27:43.569078 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:27:43.569086 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:27:43.569094 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:27:43.569101 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:27:43.569109 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:27:43.569116 | orchestrator | 2025-08-29 14:27:43.569124 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-08-29 14:27:43.569132 | orchestrator | Friday 29 August 2025 14:27:04 +0000 (0:00:01.332) 0:00:01.419 ********* 2025-08-29 14:27:43.569140 | orchestrator | ok: [testbed-manager] 2025-08-29 14:27:43.569148 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:27:43.569155 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:27:43.569164 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:27:43.569171 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:27:43.569179 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:27:43.569187 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:27:43.569194 | orchestrator | 2025-08-29 14:27:43.569202 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-08-29 14:27:43.569210 | orchestrator | 2025-08-29 14:27:43.569218 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2025-08-29 14:27:43.569226 | orchestrator | Friday 29 August 2025 14:27:05 +0000 (0:00:01.144) 0:00:02.563 ********* 2025-08-29 14:27:43.569233 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:27:43.569241 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:27:43.569249 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:27:43.569257 | orchestrator | 2025-08-29 14:27:43.569265 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-08-29 14:27:43.569273 | orchestrator | Friday 29 August 2025 14:27:06 +0000 (0:00:00.133) 0:00:02.697 ********* 2025-08-29 14:27:43.569281 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:27:43.569289 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:27:43.569296 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:27:43.569304 | orchestrator | 2025-08-29 14:27:43.569312 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-08-29 14:27:43.569319 | orchestrator | Friday 29 August 2025 14:27:06 +0000 (0:00:00.210) 0:00:02.909 ********* 2025-08-29 14:27:43.569327 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:27:43.569335 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:27:43.569342 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:27:43.569350 | orchestrator | 2025-08-29 14:27:43.569358 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-08-29 14:27:43.569366 | orchestrator | Friday 29 August 2025 14:27:06 +0000 (0:00:00.195) 0:00:03.104 ********* 2025-08-29 14:27:43.569392 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:27:43.569404 | orchestrator | 2025-08-29 14:27:43.569413 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2025-08-29 14:27:43.569421 | orchestrator | Friday 29 August 2025 14:27:06 +0000 (0:00:00.139) 0:00:03.244 ********* 2025-08-29 14:27:43.569430 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:27:43.569439 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:27:43.569447 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:27:43.569456 | orchestrator | 2025-08-29 14:27:43.569464 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-08-29 14:27:43.569473 | orchestrator | Friday 29 August 2025 14:27:07 +0000 (0:00:00.439) 0:00:03.683 ********* 2025-08-29 14:27:43.569481 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:27:43.569490 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:27:43.569504 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:27:43.569513 | orchestrator | 2025-08-29 14:27:43.569522 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-08-29 14:27:43.569530 | orchestrator | Friday 29 August 2025 14:27:07 +0000 (0:00:00.103) 0:00:03.787 ********* 2025-08-29 14:27:43.569539 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:27:43.569548 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:27:43.569556 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:27:43.569564 | orchestrator | 2025-08-29 14:27:43.569573 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-08-29 14:27:43.569582 | orchestrator | Friday 29 August 2025 14:27:08 +0000 (0:00:01.043) 0:00:04.830 ********* 2025-08-29 14:27:43.569591 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:27:43.569599 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:27:43.569608 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:27:43.569616 | orchestrator | 2025-08-29 14:27:43.569625 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-08-29 
14:27:43.569634 | orchestrator | Friday 29 August 2025 14:27:08 +0000 (0:00:00.480) 0:00:05.311 ********* 2025-08-29 14:27:43.569643 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:27:43.569651 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:27:43.569660 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:27:43.569670 | orchestrator | 2025-08-29 14:27:43.569678 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-08-29 14:27:43.569687 | orchestrator | Friday 29 August 2025 14:27:09 +0000 (0:00:01.030) 0:00:06.342 ********* 2025-08-29 14:27:43.569696 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:27:43.569704 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:27:43.569713 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:27:43.569722 | orchestrator | 2025-08-29 14:27:43.569731 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-08-29 14:27:43.569739 | orchestrator | Friday 29 August 2025 14:27:27 +0000 (0:00:17.697) 0:00:24.039 ********* 2025-08-29 14:27:43.569748 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:27:43.569756 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:27:43.569764 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:27:43.569772 | orchestrator | 2025-08-29 14:27:43.569780 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-08-29 14:27:43.569802 | orchestrator | Friday 29 August 2025 14:27:27 +0000 (0:00:00.087) 0:00:24.126 ********* 2025-08-29 14:27:43.569811 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:27:43.569835 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:27:43.569843 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:27:43.569851 | orchestrator | 2025-08-29 14:27:43.569858 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-08-29 
14:27:43.569866 | orchestrator | Friday 29 August 2025 14:27:34 +0000 (0:00:07.286) 0:00:31.413 ********* 2025-08-29 14:27:43.569874 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:27:43.569882 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:27:43.569889 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:27:43.569897 | orchestrator | 2025-08-29 14:27:43.569905 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-08-29 14:27:43.569912 | orchestrator | Friday 29 August 2025 14:27:35 +0000 (0:00:00.420) 0:00:31.834 ********* 2025-08-29 14:27:43.569920 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-08-29 14:27:43.569928 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-08-29 14:27:43.569940 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-08-29 14:27:43.569948 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-08-29 14:27:43.569956 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-08-29 14:27:43.569964 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-08-29 14:27:43.569971 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-08-29 14:27:43.569986 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-08-29 14:27:43.569993 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-08-29 14:27:43.570001 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-08-29 14:27:43.570009 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-08-29 14:27:43.570070 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-08-29 14:27:43.570080 | orchestrator | 2025-08-29 14:27:43.570088 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2025-08-29 14:27:43.570095 | orchestrator | Friday 29 August 2025 14:27:38 +0000 (0:00:03.441) 0:00:35.276 ********* 2025-08-29 14:27:43.570103 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:27:43.570111 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:27:43.570119 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:27:43.570126 | orchestrator | 2025-08-29 14:27:43.570134 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 14:27:43.570142 | orchestrator | 2025-08-29 14:27:43.570150 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 14:27:43.570157 | orchestrator | Friday 29 August 2025 14:27:39 +0000 (0:00:01.168) 0:00:36.444 ********* 2025-08-29 14:27:43.570165 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:27:43.570173 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:27:43.570180 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:27:43.570188 | orchestrator | ok: [testbed-manager] 2025-08-29 14:27:43.570196 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:27:43.570203 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:27:43.570211 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:27:43.570219 | orchestrator | 2025-08-29 14:27:43.570226 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:27:43.570235 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:27:43.570243 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:27:43.570253 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:27:43.570261 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:27:43.570269 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:27:43.570277 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:27:43.570285 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:27:43.570292 | orchestrator | 2025-08-29 14:27:43.570300 | orchestrator | 2025-08-29 14:27:43.570308 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:27:43.570316 | orchestrator | Friday 29 August 2025 14:27:43 +0000 (0:00:03.771) 0:00:40.216 ********* 2025-08-29 14:27:43.570324 | orchestrator | =============================================================================== 2025-08-29 14:27:43.570331 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.70s 2025-08-29 14:27:43.570339 | orchestrator | Install required packages (Debian) -------------------------------------- 7.29s 2025-08-29 14:27:43.570347 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.77s 2025-08-29 14:27:43.570354 | orchestrator | Copy fact files --------------------------------------------------------- 3.44s 2025-08-29 14:27:43.570362 | orchestrator | Create custom facts directory ------------------------------------------- 1.33s 2025-08-29 14:27:43.570375 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.17s 2025-08-29 14:27:43.570388 | orchestrator | Copy fact file ---------------------------------------------------------- 1.14s 2025-08-29 14:27:43.796431 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.04s 2025-08-29 14:27:43.796522 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.03s 2025-08-29 14:27:43.796531 | orchestrator | osism.commons.repository : Remove sources.list 
file --------------------- 0.48s 2025-08-29 14:27:43.796539 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s 2025-08-29 14:27:43.796547 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s 2025-08-29 14:27:43.796555 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s 2025-08-29 14:27:43.796564 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s 2025-08-29 14:27:43.796572 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s 2025-08-29 14:27:43.796582 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.13s 2025-08-29 14:27:43.796590 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s 2025-08-29 14:27:43.796598 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s 2025-08-29 14:27:44.136513 | orchestrator | + osism apply bootstrap 2025-08-29 14:27:56.110676 | orchestrator | 2025-08-29 14:27:56 | INFO  | Task 903527ab-56bd-4f3d-8d1a-e84684b1484a (bootstrap) was prepared for execution. 2025-08-29 14:27:56.110796 | orchestrator | 2025-08-29 14:27:56 | INFO  | It takes a moment until task 903527ab-56bd-4f3d-8d1a-e84684b1484a (bootstrap) has been started and output is visible here. 
2025-08-29 14:28:11.726566 | orchestrator | 2025-08-29 14:28:11.726678 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-08-29 14:28:11.726695 | orchestrator | 2025-08-29 14:28:11.726707 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-08-29 14:28:11.726718 | orchestrator | Friday 29 August 2025 14:27:59 +0000 (0:00:00.151) 0:00:00.151 ********* 2025-08-29 14:28:11.726729 | orchestrator | ok: [testbed-manager] 2025-08-29 14:28:11.726741 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:28:11.726752 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:28:11.726763 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:28:11.726773 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:28:11.726784 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:28:11.726794 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:28:11.726858 | orchestrator | 2025-08-29 14:28:11.726871 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 14:28:11.726881 | orchestrator | 2025-08-29 14:28:11.726892 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 14:28:11.726903 | orchestrator | Friday 29 August 2025 14:28:00 +0000 (0:00:00.212) 0:00:00.363 ********* 2025-08-29 14:28:11.726913 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:28:11.726924 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:28:11.726934 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:28:11.726945 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:28:11.726955 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:28:11.726965 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:28:11.726976 | orchestrator | ok: [testbed-manager] 2025-08-29 14:28:11.726986 | orchestrator | 2025-08-29 14:28:11.726997 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2025-08-29 14:28:11.727007 | orchestrator |
2025-08-29 14:28:11.727018 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 14:28:11.727029 | orchestrator | Friday 29 August 2025 14:28:03 +0000 (0:00:03.886) 0:00:04.250 *********
2025-08-29 14:28:11.727040 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-08-29 14:28:11.727051 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-08-29 14:28:11.727085 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-08-29 14:28:11.727098 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-08-29 14:28:11.727109 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 14:28:11.727121 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-08-29 14:28:11.727133 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 14:28:11.727164 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-08-29 14:28:11.727176 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-08-29 14:28:11.727188 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-08-29 14:28:11.727199 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-08-29 14:28:11.727211 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 14:28:11.727223 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-08-29 14:28:11.727235 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-08-29 14:28:11.727246 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-08-29 14:28:11.727259 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-08-29 14:28:11.727271 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-08-29 14:28:11.727283 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-08-29 14:28:11.727294 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-08-29 14:28:11.727306 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-08-29 14:28:11.727318 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:28:11.727329 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-08-29 14:28:11.727341 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-08-29 14:28:11.727353 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-08-29 14:28:11.727364 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-08-29 14:28:11.727376 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-08-29 14:28:11.727388 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:28:11.727400 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-08-29 14:28:11.727412 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-08-29 14:28:11.727423 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:28:11.727435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 14:28:11.727448 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-08-29 14:28:11.727458 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-08-29 14:28:11.727468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 14:28:11.727479 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-08-29 14:28:11.727489 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-08-29 14:28:11.727499 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:28:11.727510 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-08-29 14:28:11.727520 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-08-29 14:28:11.727535 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 14:28:11.727546 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-08-29 14:28:11.727556 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-08-29 14:28:11.727567 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-08-29 14:28:11.727577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 14:28:11.727587 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-08-29 14:28:11.727598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 14:28:11.727628 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-08-29 14:28:11.727647 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-08-29 14:28:11.727658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 14:28:11.727668 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:28:11.727679 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-08-29 14:28:11.727690 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:28:11.727700 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-08-29 14:28:11.727711 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-08-29 14:28:11.727721 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-08-29 14:28:11.727732 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:28:11.727742 | orchestrator |
2025-08-29 14:28:11.727753 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-08-29 14:28:11.727764 | orchestrator |
2025-08-29 14:28:11.727774 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-08-29 14:28:11.727785 | orchestrator | Friday 29 August 2025 14:28:04 +0000 (0:00:00.461) 0:00:04.712 *********
2025-08-29 14:28:11.727795 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:28:11.727825 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:28:11.727836 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:28:11.727847 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:28:11.727857 | orchestrator | ok: [testbed-manager]
2025-08-29 14:28:11.727868 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:28:11.727878 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:28:11.727889 | orchestrator |
2025-08-29 14:28:11.727900 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-08-29 14:28:11.727910 | orchestrator | Friday 29 August 2025 14:28:05 +0000 (0:00:01.224) 0:00:05.937 *********
2025-08-29 14:28:11.727921 | orchestrator | ok: [testbed-manager]
2025-08-29 14:28:11.727931 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:28:11.727942 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:28:11.727952 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:28:11.727963 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:28:11.727973 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:28:11.727983 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:28:11.727994 | orchestrator |
2025-08-29 14:28:11.728005 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-08-29 14:28:11.728015 | orchestrator | Friday 29 August 2025 14:28:06 +0000 (0:00:01.250) 0:00:07.188 *********
2025-08-29 14:28:11.728027 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:28:11.728040 | orchestrator |
2025-08-29 14:28:11.728051 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-08-29 14:28:11.728062 | orchestrator | Friday 29 August 2025 14:28:07 +0000 (0:00:00.253) 0:00:07.442 *********
2025-08-29 14:28:11.728072 | orchestrator | changed: [testbed-manager]
2025-08-29 14:28:11.728083 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:28:11.728094 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:28:11.728104 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:28:11.728115 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:28:11.728125 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:28:11.728136 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:28:11.728146 | orchestrator |
2025-08-29 14:28:11.728156 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-08-29 14:28:11.728167 | orchestrator | Friday 29 August 2025 14:28:09 +0000 (0:00:02.102) 0:00:09.544 *********
2025-08-29 14:28:11.728178 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:28:11.728189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:28:11.728208 | orchestrator |
2025-08-29 14:28:11.728219 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-08-29 14:28:11.728230 | orchestrator | Friday 29 August 2025 14:28:09 +0000 (0:00:00.284) 0:00:09.829 *********
2025-08-29 14:28:11.728240 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:28:11.728251 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:28:11.728261 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:28:11.728272 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:28:11.728282 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:28:11.728293 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:28:11.728303 | orchestrator |
2025-08-29 14:28:11.728314 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-08-29 14:28:11.728324 | orchestrator | Friday 29 August 2025 14:28:10 +0000 (0:00:01.006) 0:00:10.835 *********
2025-08-29 14:28:11.728335 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:28:11.728345 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:28:11.728356 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:28:11.728366 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:28:11.728377 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:28:11.728387 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:28:11.728397 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:28:11.728408 | orchestrator |
2025-08-29 14:28:11.728418 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-08-29 14:28:11.728429 | orchestrator | Friday 29 August 2025 14:28:11 +0000 (0:00:00.598) 0:00:11.433 *********
2025-08-29 14:28:11.728444 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:28:11.728455 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:28:11.728466 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:28:11.728476 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:28:11.728486 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:28:11.728497 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:28:11.728507 | orchestrator | ok: [testbed-manager]
2025-08-29 14:28:11.728518 | orchestrator |
2025-08-29 14:28:11.728529 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-08-29 14:28:11.728540 | orchestrator | Friday 29 August 2025 14:28:11 +0000 (0:00:00.426) 0:00:11.860 *********
2025-08-29 14:28:11.728550 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:28:11.728561 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:28:11.728579 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:28:23.535991 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:28:23.536105 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:28:23.536118 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:28:23.536128 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:28:23.536139 | orchestrator |
2025-08-29 14:28:23.536151 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-08-29 14:28:23.536162 | orchestrator | Friday 29 August 2025 14:28:11 +0000 (0:00:00.226) 0:00:12.086 *********
2025-08-29 14:28:23.536174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:28:23.536197 | orchestrator |
2025-08-29 14:28:23.536208 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-08-29 14:28:23.536219 | orchestrator | Friday 29 August 2025 14:28:12 +0000 (0:00:00.325) 0:00:12.412 *********
2025-08-29 14:28:23.536229 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:28:23.536239 | orchestrator |
2025-08-29 14:28:23.536249 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-08-29 14:28:23.536258 | orchestrator | Friday 29 August 2025 14:28:12 +0000 (0:00:00.307) 0:00:12.720 *********
2025-08-29 14:28:23.536289 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:28:23.536301 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:28:23.536310 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:28:23.536320 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:28:23.536329 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:28:23.536338 | orchestrator | ok: [testbed-manager]
2025-08-29 14:28:23.536348 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:28:23.536357 | orchestrator |
2025-08-29 14:28:23.536367 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-08-29 14:28:23.536377 | orchestrator | Friday 29 August 2025 14:28:13 +0000 (0:00:01.248) 0:00:13.968 *********
2025-08-29 14:28:23.536386 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:28:23.536396 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:28:23.536405 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:28:23.536415 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:28:23.536424 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:28:23.536434 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:28:23.536443 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:28:23.536452 | orchestrator |
2025-08-29 14:28:23.536462 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-08-29 14:28:23.536472 | orchestrator | Friday 29 August 2025 14:28:13 +0000 (0:00:00.233) 0:00:14.202 *********
2025-08-29 14:28:23.536481 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:28:23.536490 | orchestrator | ok: [testbed-manager]
2025-08-29 14:28:23.536500 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:28:23.536510 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:28:23.536521 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:28:23.536531 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:28:23.536542 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:28:23.536552 | orchestrator |
2025-08-29 14:28:23.536563 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-08-29 14:28:23.536574 | orchestrator | Friday 29 August 2025 14:28:14 +0000 (0:00:00.604) 0:00:14.807 *********
2025-08-29 14:28:23.536585 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:28:23.536595 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:28:23.536606 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:28:23.536616 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:28:23.536627 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:28:23.536637 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:28:23.536648 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:28:23.536659 | orchestrator |
2025-08-29 14:28:23.536669 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-08-29 14:28:23.536682 | orchestrator | Friday 29 August 2025 14:28:14 +0000 (0:00:00.260) 0:00:15.067 *********
2025-08-29 14:28:23.536692 | orchestrator | ok: [testbed-manager]
2025-08-29 14:28:23.536703 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:28:23.536713 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:28:23.536724 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:28:23.536734 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:28:23.536745 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:28:23.536755 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:28:23.536765 | orchestrator |
2025-08-29 14:28:23.536776 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-08-29 14:28:23.536787 | orchestrator | Friday 29 August 2025 14:28:15 +0000 (0:00:00.523) 0:00:15.591 *********
2025-08-29 14:28:23.536827 | orchestrator | ok: [testbed-manager]
2025-08-29 14:28:23.536839 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:28:23.536849 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:28:23.536860 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:28:23.536870 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:28:23.536879 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:28:23.536888 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:28:23.536897 | orchestrator |
2025-08-29 14:28:23.536907 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-08-29 14:28:23.536924 | orchestrator | Friday 29 August 2025 14:28:16 +0000 (0:00:01.111) 0:00:16.702 *********
2025-08-29 14:28:23.536962 | orchestrator | ok: [testbed-manager]
2025-08-29 14:28:23.536973 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:28:23.536982 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:28:23.536992 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:28:23.537001 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:28:23.537011 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:28:23.537020 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:28:23.537029 | orchestrator |
2025-08-29 14:28:23.537040 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-08-29 14:28:23.537050 | orchestrator | Friday 29 August 2025 14:28:17 +0000 (0:00:01.073) 0:00:17.775 *********
2025-08-29 14:28:23.537077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:28:23.537088 | orchestrator |
2025-08-29 14:28:23.537098 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-08-29 14:28:23.537108 | orchestrator | Friday 29 August 2025 14:28:17 +0000 (0:00:00.333) 0:00:18.109 *********
2025-08-29 14:28:23.537117 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:28:23.537126 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:28:23.537136 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:28:23.537146 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:28:23.537155 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:28:23.537164 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:28:23.537174 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:28:23.537183 | orchestrator |
2025-08-29 14:28:23.537192 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-08-29 14:28:23.537202 | orchestrator | Friday 29 August 2025 14:28:19 +0000 (0:00:01.217) 0:00:19.327 *********
2025-08-29 14:28:23.537211 | orchestrator | ok: [testbed-manager]
2025-08-29 14:28:23.537220 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:28:23.537230 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:28:23.537239 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:28:23.537248 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:28:23.537257 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:28:23.537267 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:28:23.537276 | orchestrator |
2025-08-29 14:28:23.537285 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-08-29 14:28:23.537295 | orchestrator | Friday 29 August 2025 14:28:19 +0000 (0:00:00.230) 0:00:19.558 *********
2025-08-29 14:28:23.537304 | orchestrator | ok: [testbed-manager]
2025-08-29 14:28:23.537314 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:28:23.537323 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:28:23.537332 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:28:23.537341 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:28:23.537350 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:28:23.537360 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:28:23.537369 | orchestrator |
2025-08-29 14:28:23.537378 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-08-29 14:28:23.537388 | orchestrator | Friday 29 August 2025 14:28:19 +0000 (0:00:00.249) 0:00:19.807 *********
2025-08-29 14:28:23.537397 | orchestrator | ok: [testbed-manager]
2025-08-29 14:28:23.537406 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:28:23.537416 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:28:23.537425 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:28:23.537434 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:28:23.537443 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:28:23.537452 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:28:23.537462 | orchestrator |
2025-08-29 14:28:23.537471 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-08-29 14:28:23.537481 | orchestrator | Friday 29 August 2025 14:28:19 +0000 (0:00:00.239) 0:00:20.046 *********
2025-08-29 14:28:23.537497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:28:23.537509 | orchestrator |
2025-08-29 14:28:23.537519 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-08-29 14:28:23.537528 | orchestrator | Friday 29 August 2025 14:28:20 +0000 (0:00:00.298) 0:00:20.344 *********
2025-08-29 14:28:23.537537 | orchestrator | ok: [testbed-manager]
2025-08-29 14:28:23.537547 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:28:23.537556 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:28:23.537565 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:28:23.537575 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:28:23.537584 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:28:23.537593 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:28:23.537603 | orchestrator |
2025-08-29 14:28:23.537612 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-08-29 14:28:23.537621 | orchestrator | Friday 29 August 2025 14:28:20 +0000 (0:00:00.549) 0:00:20.894 *********
2025-08-29 14:28:23.537631 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:28:23.537640 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:28:23.537650 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:28:23.537659 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:28:23.537668 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:28:23.537678 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:28:23.537687 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:28:23.537696 | orchestrator |
2025-08-29 14:28:23.537706 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-08-29 14:28:23.537715 | orchestrator | Friday 29 August 2025 14:28:20 +0000 (0:00:00.230) 0:00:21.125 *********
2025-08-29 14:28:23.537724 | orchestrator | ok: [testbed-manager]
2025-08-29 14:28:23.537734 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:28:23.537743 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:28:23.537752 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:28:23.537762 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:28:23.537771 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:28:23.537780 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:28:23.537790 | orchestrator |
2025-08-29 14:28:23.537816 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-08-29 14:28:23.537830 | orchestrator | Friday 29 August 2025 14:28:21 +0000 (0:00:01.064) 0:00:22.189 *********
2025-08-29 14:28:23.537839 | orchestrator | ok: [testbed-manager]
2025-08-29 14:28:23.537849 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:28:23.537858 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:28:23.537868 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:28:23.537877 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:28:23.537886 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:28:23.537896 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:28:23.537905 | orchestrator |
2025-08-29 14:28:23.537915 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-08-29 14:28:23.537924 | orchestrator | Friday 29 August 2025 14:28:22 +0000 (0:00:00.568) 0:00:22.758 *********
2025-08-29 14:28:23.537934 | orchestrator | ok: [testbed-manager]
2025-08-29 14:28:23.537943 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:28:23.537953 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:28:23.537962 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:28:23.537978 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:29:01.881202 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:29:01.881307 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:29:01.881321 | orchestrator |
2025-08-29 14:29:01.881333 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-08-29 14:29:01.881345 | orchestrator | Friday 29 August 2025 14:28:23 +0000 (0:00:01.054) 0:00:23.812 *********
2025-08-29 14:29:01.881355 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:29:01.881365 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:29:01.881396 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:29:01.881406 | orchestrator | changed: [testbed-manager]
2025-08-29 14:29:01.881416 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:29:01.881425 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:29:01.881435 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:29:01.881444 | orchestrator |
2025-08-29 14:29:01.881454 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-08-29 14:29:01.881464 | orchestrator | Friday 29 August 2025 14:28:40 +0000 (0:00:16.515) 0:00:40.328 *********
2025-08-29 14:29:01.881473 | orchestrator | ok: [testbed-manager]
2025-08-29 14:29:01.881483 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:29:01.881493 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:29:01.881502 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:29:01.881512 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:29:01.881521 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:29:01.881531 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:29:01.881540 | orchestrator |
2025-08-29 14:29:01.881550 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-08-29 14:29:01.881559 | orchestrator | Friday 29 August 2025 14:28:40 +0000 (0:00:00.228) 0:00:40.557 *********
2025-08-29 14:29:01.881569 | orchestrator | ok: [testbed-manager]
2025-08-29 14:29:01.881579 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:29:01.881588 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:29:01.881597 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:29:01.881607 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:29:01.881616 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:29:01.881626 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:29:01.881635 | orchestrator |
2025-08-29 14:29:01.881645 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-08-29 14:29:01.881654 | orchestrator | Friday 29 August 2025 14:28:40 +0000 (0:00:00.215) 0:00:40.773 *********
2025-08-29 14:29:01.881664 | orchestrator | ok: [testbed-manager]
2025-08-29 14:29:01.881673 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:29:01.881683 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:29:01.881692 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:29:01.881702 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:29:01.881711 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:29:01.881722 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:29:01.881733 | orchestrator |
2025-08-29 14:29:01.881744 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-08-29 14:29:01.881755 | orchestrator | Friday 29 August 2025 14:28:40 +0000 (0:00:00.222) 0:00:40.996 *********
2025-08-29 14:29:01.881768 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:29:01.881804 | orchestrator |
2025-08-29 14:29:01.881815 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-08-29 14:29:01.881827 | orchestrator | Friday 29 August 2025 14:28:40 +0000 (0:00:00.289) 0:00:41.286 *********
2025-08-29 14:29:01.881837 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:29:01.881849 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:29:01.881859 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:29:01.881870 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:29:01.881880 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:29:01.881891 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:29:01.881902 | orchestrator | ok: [testbed-manager]
2025-08-29 14:29:01.881913 | orchestrator |
2025-08-29 14:29:01.881922 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-08-29 14:29:01.881932 | orchestrator | Friday 29 August 2025 14:28:42 +0000 (0:00:01.459) 0:00:42.745 *********
2025-08-29 14:29:01.881941 | orchestrator | changed: [testbed-manager]
2025-08-29 14:29:01.881951 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:29:01.881961 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:29:01.881970 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:29:01.881987 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:29:01.881997 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:29:01.882006 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:29:01.882067 | orchestrator |
2025-08-29 14:29:01.882079 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-08-29 14:29:01.882089 | orchestrator | Friday 29 August 2025 14:28:43 +0000 (0:00:00.934) 0:00:43.680 *********
2025-08-29 14:29:01.882099 | orchestrator | ok: [testbed-manager]
2025-08-29 14:29:01.882108 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:29:01.882117 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:29:01.882127 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:29:01.882137 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:29:01.882146 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:29:01.882156 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:29:01.882165 | orchestrator |
2025-08-29 14:29:01.882175 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-08-29 14:29:01.882184 | orchestrator | Friday 29 August 2025 14:28:44 +0000 (0:00:00.741) 0:00:44.422 *********
2025-08-29 14:29:01.882195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:29:01.882207 | orchestrator |
2025-08-29 14:29:01.882216 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-08-29 14:29:01.882227 | orchestrator | Friday 29 August 2025 14:28:44 +0000 (0:00:00.310) 0:00:44.733 *********
2025-08-29 14:29:01.882236 | orchestrator | changed: [testbed-manager]
2025-08-29 14:29:01.882246 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:29:01.882255 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:29:01.882264 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:29:01.882274 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:29:01.882284 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:29:01.882293 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:29:01.882303 | orchestrator |
2025-08-29 14:29:01.882328 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-08-29 14:29:01.882338 | orchestrator | Friday 29 August 2025 14:28:45 +0000 (0:00:01.110) 0:00:45.843 *********
2025-08-29 14:29:01.882348 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:29:01.882357 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:29:01.882367 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:29:01.882376 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:29:01.882386 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:29:01.882395 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:29:01.882405 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:29:01.882414 | orchestrator |
2025-08-29 14:29:01.882424 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-08-29 14:29:01.882434 | orchestrator | Friday 29 August 2025 14:28:45 +0000 (0:00:00.344) 0:00:46.188 *********
2025-08-29 14:29:01.882443 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:29:01.882453 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:29:01.882462 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:29:01.882472 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:29:01.882481 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:29:01.882491 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:29:01.882501 | orchestrator | changed: [testbed-manager]
2025-08-29 14:29:01.882510 | orchestrator |
2025-08-29 14:29:01.882520 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-08-29 14:29:01.882529 | orchestrator | Friday 29 August 2025 14:28:56 +0000 (0:00:10.954) 0:00:57.142 *********
2025-08-29 14:29:01.882539 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:29:01.882548 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:29:01.882558 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:29:01.882567 | orchestrator | ok: [testbed-manager]
2025-08-29 14:29:01.882577 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:29:01.882594 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:29:01.882603 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:29:01.882613 | orchestrator |
2025-08-29 14:29:01.882622 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-08-29 14:29:01.882632 | orchestrator | Friday 29 August 2025 14:28:57 +0000 (0:00:00.803) 0:00:57.945 *********
2025-08-29 14:29:01.882642 | orchestrator | ok: [testbed-manager]
2025-08-29 14:29:01.882651 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:29:01.882661 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:29:01.882670 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:29:01.882679 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:29:01.882689 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:29:01.882698 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:29:01.882708 | orchestrator |
2025-08-29 14:29:01.882718 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-08-29 14:29:01.882727 | orchestrator | Friday 29 August 2025 14:28:58 +0000 (0:00:00.902) 0:00:58.848 *********
2025-08-29 14:29:01.882737 | orchestrator | ok: [testbed-manager]
2025-08-29 14:29:01.882746 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:29:01.882756 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:29:01.882765 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:29:01.882775 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:29:01.882801 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:29:01.882811 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:29:01.882820 | orchestrator |
2025-08-29 14:29:01.882830 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-08-29 14:29:01.882839 | orchestrator | Friday 29 August 2025 14:28:58 +0000 (0:00:00.244) 0:00:59.093 *********
2025-08-29 14:29:01.882849 | orchestrator | ok: [testbed-manager]
2025-08-29 14:29:01.882858 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:29:01.882867 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:29:01.882877 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:29:01.882886 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:29:01.882895 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:29:01.882904 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:29:01.882913 | orchestrator |
2025-08-29 14:29:01.882923 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-08-29 14:29:01.882932 | orchestrator | Friday 29 August 2025 14:28:59 +0000 (0:00:00.262) 0:00:59.355 *********
2025-08-29 14:29:01.882958 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:29:01.882969 | orchestrator |
2025-08-29 14:29:01.882979 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-08-29 14:29:01.882988 | orchestrator | Friday 29 August 2025 14:28:59 +0000 (0:00:00.300) 0:00:59.655 *********
2025-08-29 14:29:01.882998 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:29:01.883008 | orchestrator | ok: [testbed-manager]
2025-08-29 14:29:01.883017 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:29:01.883027 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:29:01.883036 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:29:01.883046 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:29:01.883055 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:29:01.883064 | orchestrator |
2025-08-29 14:29:01.883074 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-08-29 14:29:01.883083 | orchestrator | Friday 29 August 2025 14:29:01 +0000 (0:00:01.703) 0:01:01.359 *********
2025-08-29 14:29:01.883093 | orchestrator | changed: [testbed-manager]
2025-08-29 14:29:01.883102 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:29:01.883112 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:29:01.883121 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:29:01.883135 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:29:01.883145 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:29:01.883154 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:29:01.883170 | orchestrator |
2025-08-29 14:29:01.883180 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-08-29 14:29:01.883189 | orchestrator | Friday 29 August 2025 14:29:01 +0000 (0:00:00.551) 0:01:01.911 *********
2025-08-29 14:29:01.883199 | orchestrator | ok: [testbed-manager]
2025-08-29 14:29:01.883208 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:29:01.883218 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:29:01.883227 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:29:01.883237 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:29:01.883246 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:29:01.883255 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:29:01.883265 | orchestrator |
2025-08-29 14:29:01.883280 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-08-29 14:31:21.446413 | orchestrator | Friday 29 August 2025 14:29:01 +0000 (0:00:00.251) 0:01:02.162 *********
2025-08-29 14:31:21.446590 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:21.446610 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:21.446622 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:21.446633 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:21.446644 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:21.446655 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:21.446665 | orchestrator | ok:
[testbed-node-1] 2025-08-29 14:31:21.446676 | orchestrator | 2025-08-29 14:31:21.446688 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-08-29 14:31:21.446700 | orchestrator | Friday 29 August 2025 14:29:03 +0000 (0:00:01.224) 0:01:03.386 ********* 2025-08-29 14:31:21.446733 | orchestrator | changed: [testbed-manager] 2025-08-29 14:31:21.446746 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:31:21.446756 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:31:21.446767 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:31:21.446778 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:31:21.446789 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:31:21.446800 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:31:21.446811 | orchestrator | 2025-08-29 14:31:21.446822 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-08-29 14:31:21.446833 | orchestrator | Friday 29 August 2025 14:29:04 +0000 (0:00:01.530) 0:01:04.917 ********* 2025-08-29 14:31:21.446844 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:31:21.446855 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:31:21.446867 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:31:21.446877 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:31:21.446888 | orchestrator | ok: [testbed-manager] 2025-08-29 14:31:21.446899 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:31:21.446910 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:31:21.446921 | orchestrator | 2025-08-29 14:31:21.446932 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-08-29 14:31:21.446943 | orchestrator | Friday 29 August 2025 14:29:06 +0000 (0:00:02.141) 0:01:07.059 ********* 2025-08-29 14:31:21.446954 | orchestrator | ok: [testbed-manager] 2025-08-29 14:31:21.446965 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:31:21.446975 | orchestrator | 
ok: [testbed-node-0] 2025-08-29 14:31:21.446986 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:31:21.446996 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:31:21.447007 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:31:21.447018 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:31:21.447028 | orchestrator | 2025-08-29 14:31:21.447040 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-08-29 14:31:21.447051 | orchestrator | Friday 29 August 2025 14:29:45 +0000 (0:00:39.030) 0:01:46.089 ********* 2025-08-29 14:31:21.447062 | orchestrator | changed: [testbed-manager] 2025-08-29 14:31:21.447072 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:31:21.447083 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:31:21.447094 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:31:21.447105 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:31:21.447115 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:31:21.447155 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:31:21.447168 | orchestrator | 2025-08-29 14:31:21.447179 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-08-29 14:31:21.447190 | orchestrator | Friday 29 August 2025 14:31:01 +0000 (0:01:15.397) 0:03:01.487 ********* 2025-08-29 14:31:21.447200 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:31:21.447211 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:31:21.447222 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:31:21.447233 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:31:21.447244 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:31:21.447255 | orchestrator | ok: [testbed-manager] 2025-08-29 14:31:21.447265 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:31:21.447276 | orchestrator | 2025-08-29 14:31:21.447287 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-08-29 14:31:21.447299 
| orchestrator | Friday 29 August 2025 14:31:02 +0000 (0:00:01.591) 0:03:03.078 ********* 2025-08-29 14:31:21.447310 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:31:21.447321 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:31:21.447332 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:31:21.447342 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:31:21.447353 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:31:21.447363 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:31:21.447374 | orchestrator | changed: [testbed-manager] 2025-08-29 14:31:21.447385 | orchestrator | 2025-08-29 14:31:21.447396 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-08-29 14:31:21.447407 | orchestrator | Friday 29 August 2025 14:31:15 +0000 (0:00:12.892) 0:03:15.971 ********* 2025-08-29 14:31:21.447437 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-08-29 14:31:21.447465 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 
'value': 8192}]}) 2025-08-29 14:31:21.447507 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-08-29 14:31:21.447527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-08-29 14:31:21.447539 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-08-29 14:31:21.447550 | orchestrator | 2025-08-29 14:31:21.447561 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-08-29 14:31:21.447572 | orchestrator | Friday 29 August 2025 14:31:16 +0000 (0:00:00.423) 0:03:16.395 ********* 2025-08-29 14:31:21.447593 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 14:31:21.447603 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:31:21.447614 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 14:31:21.447625 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:31:21.447636 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 14:31:21.447646 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
14:31:21.447657 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 14:31:21.447668 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:31:21.447678 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 14:31:21.447689 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 14:31:21.447700 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 14:31:21.447728 | orchestrator | 2025-08-29 14:31:21.447740 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-08-29 14:31:21.447750 | orchestrator | Friday 29 August 2025 14:31:16 +0000 (0:00:00.655) 0:03:17.051 ********* 2025-08-29 14:31:21.447761 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 14:31:21.447773 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 14:31:21.447784 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 14:31:21.447794 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 14:31:21.447805 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 14:31:21.447815 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 14:31:21.447826 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 14:31:21.447837 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 14:31:21.447847 | orchestrator | skipping: [testbed-manager] => 
(item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 14:31:21.447858 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 14:31:21.447869 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:31:21.447879 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 14:31:21.447890 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 14:31:21.447900 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 14:31:21.447911 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 14:31:21.447922 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 14:31:21.447932 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 14:31:21.447943 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 14:31:21.447954 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 14:31:21.447965 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 14:31:21.447976 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 14:31:21.447987 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:31:21.448011 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 14:31:23.740772 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 14:31:23.740918 | orchestrator | skipping: 
[testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 14:31:23.740934 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 14:31:23.740947 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 14:31:23.740958 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 14:31:23.740969 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 14:31:23.740980 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 14:31:23.740991 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 14:31:23.741001 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 14:31:23.741012 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 14:31:23.741023 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:31:23.741035 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 14:31:23.741046 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 14:31:23.741057 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 14:31:23.741067 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 14:31:23.741078 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 14:31:23.741088 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  
2025-08-29 14:31:23.741099 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 14:31:23.741109 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 14:31:23.741120 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 14:31:23.741131 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:31:23.741142 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 14:31:23.741152 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 14:31:23.741164 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 14:31:23.741174 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 14:31:23.741185 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 14:31:23.741195 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 14:31:23.741206 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 14:31:23.741218 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 14:31:23.741230 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 14:31:23.741242 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 14:31:23.741254 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 14:31:23.741296 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 14:31:23.741309 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 14:31:23.741342 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 14:31:23.741354 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 14:31:23.741371 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 14:31:23.741383 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 14:31:23.741395 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 14:31:23.741407 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 14:31:23.741419 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 14:31:23.741431 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 14:31:23.741463 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 14:31:23.741476 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 14:31:23.741489 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 14:31:23.741500 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 14:31:23.741513 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 14:31:23.741525 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 14:31:23.741537 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 14:31:23.741549 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 14:31:23.741561 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 14:31:23.741573 | orchestrator |
2025-08-29 14:31:23.741585 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-08-29 14:31:23.741596 | orchestrator | Friday 29 August 2025 14:31:21 +0000 (0:00:04.672) 0:03:21.723 *********
2025-08-29 14:31:23.741607 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 14:31:23.741617 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 14:31:23.741628 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 14:31:23.741638 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 14:31:23.741649 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 14:31:23.741660 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 14:31:23.741670 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 14:31:23.741681 | orchestrator |
2025-08-29 14:31:23.741692 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-08-29 14:31:23.741702 | orchestrator | Friday 29 August 2025 14:31:22 +0000 (0:00:00.671) 0:03:22.394 *********
2025-08-29 14:31:23.741733 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:31:23.741744 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:31:23.741754 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:31:23.741773 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:31:23.741784 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:31:23.741795 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:31:23.741805 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:31:23.741816 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:31:23.741827 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:31:23.741838 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:31:23.741848 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:31:23.741860 | orchestrator |
2025-08-29 14:31:23.741871 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-08-29 14:31:23.741881 | orchestrator | Friday 29 August 2025 14:31:22 +0000 (0:00:00.590) 0:03:22.985 *********
2025-08-29 14:31:23.741892 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:31:23.741903 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:31:23.741914 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:31:23.741924 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:31:23.741935 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:31:23.741946 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:31:23.741956 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:31:23.741967 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:31:23.741983 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:31:23.741994 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:31:23.742005 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:31:23.742073 | orchestrator |
2025-08-29 14:31:23.742087 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-08-29 14:31:23.742098 | orchestrator | Friday 29 August 2025 14:31:23 +0000 (0:00:00.719) 0:03:23.705 *********
2025-08-29 14:31:23.742109 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:31:23.742120 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:31:23.742130 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:31:23.742141 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:31:23.742152 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:31:23.742163 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:31:23.742181 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:31:35.838333 | orchestrator |
2025-08-29 14:31:35.838448 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-08-29 14:31:35.838462 | orchestrator | Friday 29 August 2025 14:31:23 +0000 (0:00:00.318) 0:03:24.024 *********
2025-08-29 14:31:35.838472 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:35.838483 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:35.838492 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:35.838501 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:35.838509 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:35.838519 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:35.838528 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:35.838536 | orchestrator |
2025-08-29 14:31:35.838545 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-08-29 14:31:35.838554 | orchestrator | Friday 29 August 2025 14:31:29 +0000 (0:00:05.864) 0:03:29.889 *********
2025-08-29 14:31:35.838563 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-08-29 14:31:35.838572 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:31:35.838604 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-08-29 14:31:35.838614 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-08-29 14:31:35.838622 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:31:35.838631 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-08-29 14:31:35.838639 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:31:35.838648 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-08-29 14:31:35.838656 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:31:35.838665 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:31:35.838673 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-08-29 14:31:35.838681 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:31:35.838690 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-08-29 14:31:35.838741 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:31:35.838753 | orchestrator |
2025-08-29 14:31:35.838766 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-08-29 14:31:35.838775 | orchestrator | Friday 29 August 2025 14:31:29 +0000 (0:00:00.357) 0:03:30.246 *********
2025-08-29 14:31:35.838784 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-08-29 14:31:35.838792 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-08-29 14:31:35.838801 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-08-29 14:31:35.838809 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-08-29 14:31:35.838818 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-08-29 14:31:35.838826 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-08-29 14:31:35.838835 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-08-29 14:31:35.838843 | orchestrator |
2025-08-29 14:31:35.838852 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-08-29 14:31:35.838861 | orchestrator | Friday 29 August 2025 14:31:31 +0000 (0:00:01.079) 0:03:31.326 *********
2025-08-29 14:31:35.838872 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:31:35.838885 | orchestrator |
2025-08-29 14:31:35.838895 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-08-29 14:31:35.838905 | orchestrator | Friday 29 August 2025 14:31:31 +0000 (0:00:00.548) 0:03:31.874 *********
2025-08-29 14:31:35.838915 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:35.838924 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:35.838933 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:35.838942 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:35.838952 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:35.838961 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:35.838970 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:35.838979 | orchestrator |
2025-08-29 14:31:35.838989 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-08-29 14:31:35.838999 | orchestrator | Friday 29 August 2025 14:31:32 +0000 (0:00:01.249) 0:03:33.123 *********
2025-08-29 14:31:35.839008 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:35.839018 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:35.839027 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:35.839036 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:35.839046 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:35.839055 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:35.839065 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:35.839074 | orchestrator |
2025-08-29 14:31:35.839084 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-08-29 14:31:35.839095 | orchestrator | Friday 29 August 2025 14:31:33 +0000 (0:00:00.639) 0:03:33.763 *********
2025-08-29 14:31:35.839111 | orchestrator | changed: [testbed-manager]
2025-08-29 14:31:35.839126 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:31:35.839143 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:31:35.839160 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:31:35.839188 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:31:35.839204 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:31:35.839214 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:31:35.839223 | orchestrator |
2025-08-29 14:31:35.839234 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-08-29 14:31:35.839258 | orchestrator | Friday 29 August 2025 14:31:34 +0000 (0:00:00.765) 0:03:34.529 *********
2025-08-29 14:31:35.839267 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:35.839275 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:35.839284 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:35.839292 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:35.839301 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:35.839309 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:35.839317 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:35.839326 | orchestrator |
2025-08-29 14:31:35.839334 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-08-29 14:31:35.839343 | orchestrator | Friday 29 August 2025 14:31:34 +0000 (0:00:00.627) 0:03:35.156 *********
2025-08-29 14:31:35.839371 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476534.8974407, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:31:35.839383 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476559.6136158, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:31:35.839393 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476591.8262537, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:31:35.839402 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476560.032728, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:31:35.839411 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476556.0112016, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:31:35.839427 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476587.7468781, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:31:35.839436 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False,
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476555.8563092, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:31:35.839452 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:31:51.813159 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:31:51.813249 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:31:51.813273 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:31:51.813280 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:31:51.813303 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 
14:31:51.813313 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:31:51.813320 | orchestrator | 2025-08-29 14:31:51.813328 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-08-29 14:31:51.813336 | orchestrator | Friday 29 August 2025 14:31:35 +0000 (0:00:00.958) 0:03:36.114 ********* 2025-08-29 14:31:51.813343 | orchestrator | changed: [testbed-manager] 2025-08-29 14:31:51.813351 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:31:51.813357 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:31:51.813363 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:31:51.813369 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:31:51.813376 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:31:51.813382 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:31:51.813388 | orchestrator | 2025-08-29 14:31:51.813395 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-08-29 14:31:51.813401 | orchestrator | Friday 29 August 2025 14:31:36 +0000 (0:00:01.137) 0:03:37.252 ********* 2025-08-29 14:31:51.813407 | orchestrator | changed: [testbed-manager] 2025-08-29 14:31:51.813414 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:31:51.813421 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:31:51.813428 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:31:51.813446 | orchestrator | changed: [testbed-node-4] 
2025-08-29 14:31:51.813452 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:31:51.813459 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:31:51.813465 | orchestrator |
2025-08-29 14:31:51.813471 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-08-29 14:31:51.813477 | orchestrator | Friday 29 August 2025 14:31:38 +0000 (0:00:01.165) 0:03:38.418 *********
2025-08-29 14:31:51.813483 | orchestrator | changed: [testbed-manager]
2025-08-29 14:31:51.813489 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:31:51.813495 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:31:51.813501 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:31:51.813507 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:31:51.813513 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:31:51.813519 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:31:51.813525 | orchestrator |
2025-08-29 14:31:51.813531 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-08-29 14:31:51.813537 | orchestrator | Friday 29 August 2025 14:31:39 +0000 (0:00:01.162) 0:03:39.580 *********
2025-08-29 14:31:51.813544 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:31:51.813551 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:31:51.813558 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:31:51.813564 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:31:51.813571 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:31:51.813577 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:31:51.813584 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:31:51.813590 | orchestrator |
2025-08-29 14:31:51.813601 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-08-29 14:31:51.813608 | orchestrator | Friday 29 August 2025 14:31:39 +0000 (0:00:00.306) 0:03:39.887 *********
2025-08-29 14:31:51.813614 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:51.813622 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:51.813628 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:51.813635 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:51.813641 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:51.813647 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:51.813654 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:51.813661 | orchestrator |
2025-08-29 14:31:51.813668 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-08-29 14:31:51.813674 | orchestrator | Friday 29 August 2025 14:31:40 +0000 (0:00:00.743) 0:03:40.631 *********
2025-08-29 14:31:51.813682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:31:51.813729 | orchestrator |
2025-08-29 14:31:51.813738 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-08-29 14:31:51.813744 | orchestrator | Friday 29 August 2025 14:31:40 +0000 (0:00:00.402) 0:03:41.033 *********
2025-08-29 14:31:51.813751 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:51.813757 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:31:51.813764 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:31:51.813771 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:31:51.813777 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:31:51.813783 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:31:51.813790 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:31:51.813797 | orchestrator |
2025-08-29 14:31:51.813803 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-08-29 14:31:51.813810 | orchestrator | Friday 29 August 2025 14:31:48 +0000 (0:00:07.767) 0:03:48.801 *********
2025-08-29 14:31:51.813817 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:51.813824 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:51.813830 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:51.813837 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:51.813843 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:51.813850 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:51.813857 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:51.813863 | orchestrator |
2025-08-29 14:31:51.813869 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-08-29 14:31:51.813876 | orchestrator | Friday 29 August 2025 14:31:49 +0000 (0:00:01.187) 0:03:49.989 *********
2025-08-29 14:31:51.813882 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:51.813888 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:51.813894 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:51.813900 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:51.813906 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:51.813913 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:51.813919 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:51.813926 | orchestrator |
2025-08-29 14:31:51.813934 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-08-29 14:31:51.813943 | orchestrator | Friday 29 August 2025 14:31:50 +0000 (0:00:00.984) 0:03:50.973 *********
2025-08-29 14:31:51.813950 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:51.813958 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:51.813965 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:51.813972 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:51.813978 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:51.813989 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:51.813996 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:51.814004 | orchestrator |
2025-08-29 14:31:51.814044 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-08-29 14:31:51.814055 | orchestrator | Friday 29 August 2025 14:31:51 +0000 (0:00:00.497) 0:03:51.470 *********
2025-08-29 14:31:51.814072 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:51.814080 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:51.814088 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:51.814095 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:51.814102 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:51.814109 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:51.814116 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:51.814123 | orchestrator |
2025-08-29 14:31:51.814131 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-08-29 14:31:51.814138 | orchestrator | Friday 29 August 2025 14:31:51 +0000 (0:00:00.310) 0:03:51.781 *********
2025-08-29 14:31:51.814145 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:51.814152 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:51.814160 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:51.814166 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:51.814173 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:51.814186 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:33:03.353999 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:33:03.354141 | orchestrator |
2025-08-29 14:33:03.354150 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-08-29 14:33:03.354159 | orchestrator | Friday 29 August 2025 14:31:51 +0000 (0:00:00.316) 0:03:52.098 *********
2025-08-29 14:33:03.354166 | orchestrator | ok: [testbed-manager]
2025-08-29 14:33:03.354171 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:33:03.354177 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:33:03.354183 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:33:03.354189 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:33:03.354195 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:33:03.354201 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:33:03.354206 | orchestrator |
2025-08-29 14:33:03.354212 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-08-29 14:33:03.354218 | orchestrator | Friday 29 August 2025 14:31:57 +0000 (0:00:05.445) 0:03:57.543 *********
2025-08-29 14:33:03.354225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:33:03.354233 | orchestrator |
2025-08-29 14:33:03.354239 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-08-29 14:33:03.354245 | orchestrator | Friday 29 August 2025 14:31:57 +0000 (0:00:00.391) 0:03:57.935 *********
2025-08-29 14:33:03.354252 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-08-29 14:33:03.354259 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-08-29 14:33:03.354265 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-08-29 14:33:03.354271 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-08-29 14:33:03.354276 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:33:03.354282 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-08-29 14:33:03.354288 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-08-29 14:33:03.354293 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:33:03.354298 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-08-29 14:33:03.354304 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-08-29 14:33:03.354309 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:33:03.354314 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-08-29 14:33:03.354320 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-08-29 14:33:03.354325 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:33:03.354331 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:33:03.354336 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-08-29 14:33:03.354341 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-08-29 14:33:03.354369 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:33:03.354375 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-08-29 14:33:03.354380 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-08-29 14:33:03.354386 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:33:03.354392 | orchestrator |
2025-08-29 14:33:03.354398 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-08-29 14:33:03.354404 | orchestrator | Friday 29 August 2025 14:31:58 +0000 (0:00:00.361) 0:03:58.296 *********
2025-08-29 14:33:03.354411 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:33:03.354417 | orchestrator |
2025-08-29 14:33:03.354422 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-08-29 14:33:03.354427 | orchestrator | Friday 29 August 2025 14:31:58 +0000 (0:00:00.392) 0:03:58.689 *********
2025-08-29 14:33:03.354433 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-08-29 14:33:03.354438 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:33:03.354443 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-08-29 14:33:03.354449 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:33:03.354455 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-08-29 14:33:03.354461 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:33:03.354467 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-08-29 14:33:03.354475 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:33:03.354482 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-08-29 14:33:03.354488 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:33:03.354510 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-08-29 14:33:03.354518 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:33:03.354524 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-08-29 14:33:03.354531 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:33:03.354540 | orchestrator |
2025-08-29 14:33:03.354549 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-08-29 14:33:03.354556 | orchestrator | Friday 29 August 2025 14:31:58 +0000 (0:00:00.440) 0:03:59.029 *********
2025-08-29 14:33:03.354563 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:33:03.354570 | orchestrator |
2025-08-29 14:33:03.354576 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-08-29 14:33:03.354584 | orchestrator | Friday 29 August 2025 14:31:59 +0000 (0:00:00.440) 0:03:59.470 *********
2025-08-29 14:33:03.354591 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:33:03.354614 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:33:03.354621 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:33:03.354628 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:33:03.354636 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:33:03.354643 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:33:03.354672 | orchestrator | changed: [testbed-manager]
2025-08-29 14:33:03.354679 | orchestrator |
2025-08-29 14:33:03.354684 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-08-29 14:33:03.354691 | orchestrator | Friday 29 August 2025 14:32:34 +0000 (0:00:35.136) 0:04:34.607 *********
2025-08-29 14:33:03.354697 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:33:03.354703 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:33:03.354709 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:33:03.354716 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:33:03.354722 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:33:03.354728 | orchestrator | changed: [testbed-manager]
2025-08-29 14:33:03.354741 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:33:03.354748 | orchestrator |
2025-08-29 14:33:03.354754 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-08-29 14:33:03.354762 | orchestrator | Friday 29 August 2025 14:32:42 +0000 (0:00:07.701) 0:04:42.308 *********
2025-08-29 14:33:03.354767 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:33:03.354771 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:33:03.354776 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:33:03.354780 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:33:03.354784 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:33:03.354789 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:33:03.354793 | orchestrator | changed: [testbed-manager]
2025-08-29 14:33:03.354797 | orchestrator |
2025-08-29 14:33:03.354801 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-08-29 14:33:03.354805 | orchestrator | Friday 29 August 2025 14:32:50 +0000 (0:00:08.428) 0:04:50.737 *********
2025-08-29 14:33:03.354810 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:33:03.354814 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:33:03.354818 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:33:03.354822 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:33:03.354826 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:33:03.354831 | orchestrator | ok: [testbed-manager]
2025-08-29 14:33:03.354835 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:33:03.354839 | orchestrator |
2025-08-29 14:33:03.354843 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-08-29 14:33:03.354848 | orchestrator | Friday 29 August 2025 14:32:52 +0000 (0:00:01.593) 0:04:52.330 *********
2025-08-29 14:33:03.354852 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:33:03.354857 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:33:03.354861 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:33:03.354865 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:33:03.354869 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:33:03.354873 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:33:03.354878 | orchestrator | changed: [testbed-manager]
2025-08-29 14:33:03.354882 | orchestrator |
2025-08-29 14:33:03.354886 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-08-29 14:33:03.354891 | orchestrator | Friday 29 August 2025 14:32:58 +0000 (0:00:06.689) 0:04:59.020 *********
2025-08-29 14:33:03.354896 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:33:03.354902 | orchestrator |
2025-08-29 14:33:03.354906 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-08-29 14:33:03.354910 | orchestrator | Friday 29 August 2025 14:32:59 +0000 (0:00:00.728) 0:04:59.749 *********
2025-08-29 14:33:03.354914 | orchestrator | changed: [testbed-manager]
2025-08-29 14:33:03.354917 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:33:03.354921 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:33:03.354925 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:33:03.354928 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:33:03.354932 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:33:03.354935 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:33:03.354939 | orchestrator |
2025-08-29 14:33:03.354943 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-08-29 14:33:03.354946 | orchestrator | Friday 29 August 2025 14:33:00 +0000 (0:00:01.846) 0:05:00.566 *********
2025-08-29 14:33:03.354950 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:33:03.354954 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:33:03.354958 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:33:03.354961 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:33:03.354965 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:33:03.354969 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:33:03.354972 | orchestrator | ok: [testbed-manager]
2025-08-29 14:33:03.354981 | orchestrator |
2025-08-29 14:33:03.354985 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-08-29 14:33:03.354988 | orchestrator | Friday 29 August 2025 14:33:02 +0000 (0:00:00.864) 0:05:02.412 *********
2025-08-29 14:33:03.354992 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:33:03.354996 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:33:03.355000 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:33:03.355004 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:33:03.355008 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:33:03.355011 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:33:03.355015 | orchestrator | changed: [testbed-manager]
2025-08-29 14:33:03.355019 | orchestrator |
2025-08-29 14:33:03.355022 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-08-29 14:33:03.355026 | orchestrator | Friday 29 August 2025 14:33:02 +0000 (0:00:00.864) 0:05:03.277 *********
2025-08-29 14:33:03.355030 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:33:03.355033 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:33:03.355037 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:33:03.355041 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:33:03.355044 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:33:03.355048 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:33:03.355052 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:33:03.355056 | orchestrator |
2025-08-29 14:33:03.355059 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-08-29 14:33:03.355067 | orchestrator | Friday 29 August 2025 14:33:03 +0000 (0:00:00.359) 0:05:03.637 *********
2025-08-29 14:33:30.553361 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:33:30.553515 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:33:30.553546 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:33:30.553566 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:33:30.553585 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:33:30.553603 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:33:30.553621 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:33:30.553641 | orchestrator |
2025-08-29 14:33:30.553698 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-08-29 14:33:30.553719 | orchestrator | Friday 29 August 2025 14:33:03 +0000 (0:00:00.498) 0:05:04.136 *********
2025-08-29 14:33:30.553731 | orchestrator | ok: [testbed-manager]
2025-08-29 14:33:30.553744 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:33:30.553754 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:33:30.553765 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:33:30.553775 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:33:30.553786 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:33:30.553797 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:33:30.553817 | orchestrator |
2025-08-29 14:33:30.553837 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-08-29 14:33:30.553857 | orchestrator | Friday 29 August 2025 14:33:04 +0000 (0:00:00.345) 0:05:04.481 *********
2025-08-29 14:33:30.553873 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:33:30.553886 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:33:30.553898 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:33:30.553910 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:33:30.553922 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:33:30.553942 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:33:30.553962 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:33:30.553981 | orchestrator |
2025-08-29 14:33:30.553997 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-08-29 14:33:30.554097 | orchestrator | Friday 29 August 2025 14:33:04 +0000 (0:00:00.362) 0:05:04.844 *********
2025-08-29 14:33:30.554129 | orchestrator | ok: [testbed-manager]
2025-08-29 14:33:30.554149 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:33:30.554169 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:33:30.554187 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:33:30.554242 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:33:30.554261 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:33:30.554277 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:33:30.554294 | orchestrator |
2025-08-29 14:33:30.554314 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-08-29 14:33:30.554355 | orchestrator | Friday 29 August 2025 14:33:04 +0000 (0:00:00.337) 0:05:05.181 *********
2025-08-29 14:33:30.554376 | orchestrator | ok: [testbed-manager] =>
2025-08-29 14:33:30.554391 | orchestrator |  docker_version: 5:27.5.1
2025-08-29 14:33:30.554409 | orchestrator | ok: [testbed-node-0] =>
2025-08-29 14:33:30.554426 | orchestrator |  docker_version: 5:27.5.1
2025-08-29 14:33:30.554444 | orchestrator | ok: [testbed-node-1] =>
2025-08-29 14:33:30.554455 | orchestrator |  docker_version: 5:27.5.1
2025-08-29 14:33:30.554465 | orchestrator | ok: [testbed-node-2] =>
2025-08-29 14:33:30.554476 | orchestrator |  docker_version: 5:27.5.1
2025-08-29 14:33:30.554487 | orchestrator | ok: [testbed-node-3] =>
2025-08-29 14:33:30.554497 | orchestrator |  docker_version: 5:27.5.1
2025-08-29 14:33:30.554513 | orchestrator | ok: [testbed-node-4] =>
2025-08-29 14:33:30.554532 | orchestrator |  docker_version: 5:27.5.1
2025-08-29 14:33:30.554551 | orchestrator | ok: [testbed-node-5] =>
2025-08-29 14:33:30.554565 | orchestrator |  docker_version: 5:27.5.1
2025-08-29 14:33:30.554580 | orchestrator |
2025-08-29 14:33:30.554597 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-08-29 14:33:30.554624 | orchestrator | Friday 29 August 2025 14:33:05 +0000 (0:00:00.393) 0:05:05.574 *********
2025-08-29 14:33:30.554673 | orchestrator | ok: [testbed-manager] =>
2025-08-29 14:33:30.554696 | orchestrator |  docker_cli_version: 5:27.5.1
2025-08-29 14:33:30.554715 | orchestrator | ok: [testbed-node-0] =>
2025-08-29 14:33:30.554735 |
orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:33:30.554754 | orchestrator | ok: [testbed-node-1] =>  2025-08-29 14:33:30.554774 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:33:30.554794 | orchestrator | ok: [testbed-node-2] =>  2025-08-29 14:33:30.554815 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:33:30.554836 | orchestrator | ok: [testbed-node-3] =>  2025-08-29 14:33:30.554858 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:33:30.554870 | orchestrator | ok: [testbed-node-4] =>  2025-08-29 14:33:30.554881 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:33:30.554891 | orchestrator | ok: [testbed-node-5] =>  2025-08-29 14:33:30.554902 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:33:30.554912 | orchestrator | 2025-08-29 14:33:30.554923 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-08-29 14:33:30.554933 | orchestrator | Friday 29 August 2025 14:33:05 +0000 (0:00:00.335) 0:05:05.910 ********* 2025-08-29 14:33:30.554944 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:33:30.554954 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:33:30.554965 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:33:30.554975 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:33:30.554986 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:33:30.554996 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:33:30.555015 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:33:30.555026 | orchestrator | 2025-08-29 14:33:30.555037 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-08-29 14:33:30.555047 | orchestrator | Friday 29 August 2025 14:33:05 +0000 (0:00:00.354) 0:05:06.264 ********* 2025-08-29 14:33:30.555057 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:33:30.555068 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:33:30.555078 
| orchestrator | skipping: [testbed-node-1] 2025-08-29 14:33:30.555090 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:33:30.555109 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:33:30.555127 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:33:30.555146 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:33:30.555164 | orchestrator | 2025-08-29 14:33:30.555176 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-08-29 14:33:30.555198 | orchestrator | Friday 29 August 2025 14:33:06 +0000 (0:00:00.376) 0:05:06.641 ********* 2025-08-29 14:33:30.555234 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:33:30.555252 | orchestrator | 2025-08-29 14:33:30.555272 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-08-29 14:33:30.555291 | orchestrator | Friday 29 August 2025 14:33:06 +0000 (0:00:00.513) 0:05:07.154 ********* 2025-08-29 14:33:30.555309 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:33:30.555326 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:33:30.555343 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:33:30.555363 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:33:30.555382 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:33:30.555400 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:33:30.555418 | orchestrator | ok: [testbed-manager] 2025-08-29 14:33:30.555437 | orchestrator | 2025-08-29 14:33:30.555456 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-08-29 14:33:30.555474 | orchestrator | Friday 29 August 2025 14:33:07 +0000 (0:00:01.102) 0:05:08.257 ********* 2025-08-29 14:33:30.555485 | orchestrator | ok: [testbed-node-2] 
2025-08-29 14:33:30.555496 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:33:30.555507 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:33:30.555517 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:33:30.555528 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:33:30.555538 | orchestrator | ok: [testbed-manager] 2025-08-29 14:33:30.555549 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:33:30.555559 | orchestrator | 2025-08-29 14:33:30.555570 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-08-29 14:33:30.555582 | orchestrator | Friday 29 August 2025 14:33:11 +0000 (0:00:03.462) 0:05:11.719 ********* 2025-08-29 14:33:30.555598 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-08-29 14:33:30.555618 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-08-29 14:33:30.555637 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-08-29 14:33:30.555806 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-08-29 14:33:30.555830 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-08-29 14:33:30.555841 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-08-29 14:33:30.555911 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:33:30.555923 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-08-29 14:33:30.555934 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-08-29 14:33:30.555944 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-08-29 14:33:30.555955 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:33:30.555966 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-08-29 14:33:30.555976 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-08-29 14:33:30.555987 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:33:30.556034 | orchestrator | skipping: 
[testbed-node-2] => (item=docker-engine)  2025-08-29 14:33:30.556046 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-08-29 14:33:30.556056 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-08-29 14:33:30.556067 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-08-29 14:33:30.556116 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:33:30.556130 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-08-29 14:33:30.556141 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-08-29 14:33:30.556151 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-08-29 14:33:30.556162 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:33:30.556173 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:33:30.556184 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-08-29 14:33:30.556239 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-08-29 14:33:30.556250 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-08-29 14:33:30.556283 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:33:30.556295 | orchestrator | 2025-08-29 14:33:30.556304 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-08-29 14:33:30.556314 | orchestrator | Friday 29 August 2025 14:33:12 +0000 (0:00:00.724) 0:05:12.444 ********* 2025-08-29 14:33:30.556324 | orchestrator | ok: [testbed-manager] 2025-08-29 14:33:30.556334 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:33:30.556343 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:33:30.556352 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:33:30.556395 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:33:30.556406 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:33:30.556415 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:33:30.556425 | orchestrator | 2025-08-29 
14:33:30.556434 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-08-29 14:33:30.556444 | orchestrator | Friday 29 August 2025 14:33:18 +0000 (0:00:05.914) 0:05:18.359 ********* 2025-08-29 14:33:30.556453 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:33:30.556463 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:33:30.556472 | orchestrator | ok: [testbed-manager] 2025-08-29 14:33:30.556481 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:33:30.556498 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:33:30.556508 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:33:30.556517 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:33:30.556527 | orchestrator | 2025-08-29 14:33:30.556536 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-08-29 14:33:30.556546 | orchestrator | Friday 29 August 2025 14:33:19 +0000 (0:00:01.415) 0:05:19.774 ********* 2025-08-29 14:33:30.556555 | orchestrator | ok: [testbed-manager] 2025-08-29 14:33:30.556565 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:33:30.556574 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:33:30.556583 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:33:30.556592 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:33:30.556602 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:33:30.556611 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:33:30.556621 | orchestrator | 2025-08-29 14:33:30.556630 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-08-29 14:33:30.556640 | orchestrator | Friday 29 August 2025 14:33:27 +0000 (0:00:07.703) 0:05:27.477 ********* 2025-08-29 14:33:30.556720 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:33:30.556731 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:33:30.556740 | orchestrator | changed: [testbed-node-2] 2025-08-29 
14:33:30.556764 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:14.542767 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:14.542904 | orchestrator | changed: [testbed-manager] 2025-08-29 14:34:14.542930 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:14.542949 | orchestrator | 2025-08-29 14:34:14.542971 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-08-29 14:34:14.542992 | orchestrator | Friday 29 August 2025 14:33:30 +0000 (0:00:03.341) 0:05:30.819 ********* 2025-08-29 14:34:14.543011 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:14.543024 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:14.543035 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:14.543046 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:14.543057 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:14.543067 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:14.543078 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:14.543089 | orchestrator | 2025-08-29 14:34:14.543100 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-08-29 14:34:14.543111 | orchestrator | Friday 29 August 2025 14:33:32 +0000 (0:00:01.583) 0:05:32.403 ********* 2025-08-29 14:34:14.543148 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:14.543159 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:14.543170 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:14.543180 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:14.543190 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:14.543201 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:14.543211 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:14.543222 | orchestrator | 2025-08-29 14:34:14.543232 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-08-29 
14:34:14.543244 | orchestrator | Friday 29 August 2025 14:33:33 +0000 (0:00:01.589) 0:05:33.992 ********* 2025-08-29 14:34:14.543256 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:34:14.543267 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:34:14.543279 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:34:14.543290 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:34:14.543302 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:34:14.543314 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:34:14.543325 | orchestrator | changed: [testbed-manager] 2025-08-29 14:34:14.543336 | orchestrator | 2025-08-29 14:34:14.543348 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-08-29 14:34:14.543360 | orchestrator | Friday 29 August 2025 14:33:34 +0000 (0:00:00.644) 0:05:34.636 ********* 2025-08-29 14:34:14.543372 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:14.543384 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:14.543395 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:14.543407 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:14.543418 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:14.543431 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:14.543443 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:14.543455 | orchestrator | 2025-08-29 14:34:14.543467 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-08-29 14:34:14.543479 | orchestrator | Friday 29 August 2025 14:33:43 +0000 (0:00:09.551) 0:05:44.188 ********* 2025-08-29 14:34:14.543490 | orchestrator | changed: [testbed-manager] 2025-08-29 14:34:14.543502 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:14.543513 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:14.543525 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:14.543537 | orchestrator | changed: 
[testbed-node-3] 2025-08-29 14:34:14.543548 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:14.543560 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:14.543572 | orchestrator | 2025-08-29 14:34:14.543584 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-08-29 14:34:14.543596 | orchestrator | Friday 29 August 2025 14:33:44 +0000 (0:00:00.944) 0:05:45.132 ********* 2025-08-29 14:34:14.543606 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:14.543617 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:14.543673 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:14.543694 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:14.543705 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:14.543718 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:14.543736 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:14.543754 | orchestrator | 2025-08-29 14:34:14.543772 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-08-29 14:34:14.543791 | orchestrator | Friday 29 August 2025 14:33:53 +0000 (0:00:08.602) 0:05:53.735 ********* 2025-08-29 14:34:14.543811 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:14.543829 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:14.543848 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:14.543867 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:14.543887 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:14.543904 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:14.543923 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:14.543942 | orchestrator | 2025-08-29 14:34:14.543954 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-08-29 14:34:14.543978 | orchestrator | Friday 29 August 2025 14:34:04 +0000 (0:00:10.943) 0:06:04.679 ********* 2025-08-29 
14:34:14.544015 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-08-29 14:34:14.544037 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-08-29 14:34:14.544055 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-08-29 14:34:14.544074 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-08-29 14:34:14.544085 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-08-29 14:34:14.544096 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-08-29 14:34:14.544106 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-08-29 14:34:14.544116 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-08-29 14:34:14.544126 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-08-29 14:34:14.544137 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-08-29 14:34:14.544153 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-08-29 14:34:14.544171 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-08-29 14:34:14.544190 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-08-29 14:34:14.544208 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-08-29 14:34:14.544226 | orchestrator | 2025-08-29 14:34:14.544245 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-08-29 14:34:14.544287 | orchestrator | Friday 29 August 2025 14:34:05 +0000 (0:00:01.364) 0:06:06.044 ********* 2025-08-29 14:34:14.544306 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:34:14.544324 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:34:14.544343 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:34:14.544360 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:34:14.544376 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:34:14.544386 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:34:14.544397 | orchestrator 
| skipping: [testbed-node-5] 2025-08-29 14:34:14.544407 | orchestrator | 2025-08-29 14:34:14.544418 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-08-29 14:34:14.544428 | orchestrator | Friday 29 August 2025 14:34:06 +0000 (0:00:00.574) 0:06:06.618 ********* 2025-08-29 14:34:14.544439 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:14.544449 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:14.544460 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:14.544470 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:14.544480 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:14.544491 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:14.544501 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:14.544511 | orchestrator | 2025-08-29 14:34:14.544522 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-08-29 14:34:14.544533 | orchestrator | Friday 29 August 2025 14:34:09 +0000 (0:00:03.663) 0:06:10.282 ********* 2025-08-29 14:34:14.544544 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:34:14.544554 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:34:14.544565 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:34:14.544575 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:34:14.544585 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:34:14.544596 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:34:14.544606 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:34:14.544616 | orchestrator | 2025-08-29 14:34:14.544658 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-08-29 14:34:14.544671 | orchestrator | Friday 29 August 2025 14:34:10 +0000 (0:00:00.545) 0:06:10.827 ********* 2025-08-29 14:34:14.544681 | orchestrator | skipping: [testbed-manager] => 
(item=python3-docker)  2025-08-29 14:34:14.544692 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-08-29 14:34:14.544703 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:34:14.544723 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-08-29 14:34:14.544738 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-08-29 14:34:14.544757 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:34:14.544775 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-08-29 14:34:14.544792 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-08-29 14:34:14.544811 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:34:14.544829 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-08-29 14:34:14.544840 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-08-29 14:34:14.544851 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:34:14.544861 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-08-29 14:34:14.544875 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-08-29 14:34:14.544893 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:34:14.544911 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-08-29 14:34:14.544931 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-08-29 14:34:14.544950 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:34:14.544967 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-08-29 14:34:14.544983 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-08-29 14:34:14.544993 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:34:14.545004 | orchestrator | 2025-08-29 14:34:14.545015 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-08-29 14:34:14.545025 | 
orchestrator | Friday 29 August 2025 14:34:11 +0000 (0:00:00.805) 0:06:11.632 ********* 2025-08-29 14:34:14.545036 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:34:14.545046 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:34:14.545057 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:34:14.545067 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:34:14.545078 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:34:14.545088 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:34:14.545099 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:34:14.545109 | orchestrator | 2025-08-29 14:34:14.545119 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-08-29 14:34:14.545130 | orchestrator | Friday 29 August 2025 14:34:11 +0000 (0:00:00.553) 0:06:12.186 ********* 2025-08-29 14:34:14.545140 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:34:14.545158 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:34:14.545169 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:34:14.545179 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:34:14.545189 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:34:14.545200 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:34:14.545210 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:34:14.545220 | orchestrator | 2025-08-29 14:34:14.545231 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-08-29 14:34:14.545241 | orchestrator | Friday 29 August 2025 14:34:12 +0000 (0:00:00.544) 0:06:12.731 ********* 2025-08-29 14:34:14.545252 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:34:14.545262 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:34:14.545273 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:34:14.545283 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:34:14.545293 | orchestrator | 
skipping: [testbed-node-3] 2025-08-29 14:34:14.545304 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:34:14.545314 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:34:14.545326 | orchestrator | 2025-08-29 14:34:14.545345 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-08-29 14:34:14.545363 | orchestrator | Friday 29 August 2025 14:34:12 +0000 (0:00:00.503) 0:06:13.234 ********* 2025-08-29 14:34:14.545381 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:14.545411 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:34:38.407267 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:34:38.407391 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:34:38.407398 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:34:38.407402 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:34:38.407406 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:34:38.407410 | orchestrator | 2025-08-29 14:34:38.407415 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-08-29 14:34:38.407421 | orchestrator | Friday 29 August 2025 14:34:14 +0000 (0:00:01.590) 0:06:14.824 ********* 2025-08-29 14:34:38.407425 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:34:38.407467 | orchestrator | 2025-08-29 14:34:38.407472 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-08-29 14:34:38.407477 | orchestrator | Friday 29 August 2025 14:34:15 +0000 (0:00:01.161) 0:06:15.985 ********* 2025-08-29 14:34:38.407481 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:38.407485 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:38.407490 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:38.407494 | orchestrator | 
changed: [testbed-node-2] 2025-08-29 14:34:38.407498 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:38.407502 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:38.407505 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:38.407509 | orchestrator | 2025-08-29 14:34:38.407513 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-08-29 14:34:38.407517 | orchestrator | Friday 29 August 2025 14:34:16 +0000 (0:00:00.799) 0:06:16.784 ********* 2025-08-29 14:34:38.407520 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:38.407524 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:38.407528 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:38.407531 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:38.407535 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:38.407539 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:38.407542 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:38.407546 | orchestrator | 2025-08-29 14:34:38.407550 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-08-29 14:34:38.407554 | orchestrator | Friday 29 August 2025 14:34:17 +0000 (0:00:00.912) 0:06:17.697 ********* 2025-08-29 14:34:38.407558 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:38.407561 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:38.407565 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:38.407578 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:38.407582 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:38.407585 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:38.407589 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:38.407599 | orchestrator | 2025-08-29 14:34:38.407603 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-08-29 14:34:38.407607 | 
orchestrator | Friday 29 August 2025 14:34:19 +0000 (0:00:01.610) 0:06:19.307 ********* 2025-08-29 14:34:38.407643 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:34:38.407648 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:34:38.407652 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:34:38.407655 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:34:38.407659 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:34:38.407663 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:34:38.407667 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:34:38.407671 | orchestrator | 2025-08-29 14:34:38.407674 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-08-29 14:34:38.407678 | orchestrator | Friday 29 August 2025 14:34:20 +0000 (0:00:01.398) 0:06:20.706 ********* 2025-08-29 14:34:38.407682 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:38.407686 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:38.407690 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:38.407694 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:38.407705 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:38.407709 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:38.407713 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:38.407717 | orchestrator | 2025-08-29 14:34:38.407721 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-08-29 14:34:38.407724 | orchestrator | Friday 29 August 2025 14:34:21 +0000 (0:00:01.263) 0:06:21.970 ********* 2025-08-29 14:34:38.407728 | orchestrator | changed: [testbed-manager] 2025-08-29 14:34:38.407732 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:38.407736 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:38.407740 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:38.407743 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:38.407747 | 
orchestrator | changed: [testbed-node-4]
2025-08-29 14:34:38.407751 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:34:38.407755 | orchestrator |
2025-08-29 14:34:38.407759 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-08-29 14:34:38.407763 | orchestrator | Friday 29 August 2025 14:34:23 +0000 (0:00:01.372) 0:06:23.343 *********
2025-08-29 14:34:38.407767 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:34:38.407771 | orchestrator |
2025-08-29 14:34:38.407775 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-08-29 14:34:38.407793 | orchestrator | Friday 29 August 2025 14:34:24 +0000 (0:00:01.062) 0:06:24.405 *********
2025-08-29 14:34:38.407800 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:34:38.407806 | orchestrator | ok: [testbed-manager]
2025-08-29 14:34:38.407812 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:34:38.407818 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:34:38.407824 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:34:38.407830 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:34:38.407836 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:34:38.407842 | orchestrator |
2025-08-29 14:34:38.407848 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-08-29 14:34:38.407854 | orchestrator | Friday 29 August 2025 14:34:25 +0000 (0:00:01.338) 0:06:25.743 *********
2025-08-29 14:34:38.407860 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:34:38.407865 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:34:38.407885 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:34:38.407891 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:34:38.407896 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:34:38.407902 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:34:38.407907 | orchestrator | ok: [testbed-manager]
2025-08-29 14:34:38.407913 | orchestrator |
2025-08-29 14:34:38.407919 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-08-29 14:34:38.407924 | orchestrator | Friday 29 August 2025 14:34:27 +0000 (0:00:01.833) 0:06:27.577 *********
2025-08-29 14:34:38.407930 | orchestrator | ok: [testbed-manager]
2025-08-29 14:34:38.407936 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:34:38.407942 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:34:38.407948 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:34:38.407954 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:34:38.407960 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:34:38.407966 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:34:38.407972 | orchestrator |
2025-08-29 14:34:38.407978 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-08-29 14:34:38.407985 | orchestrator | Friday 29 August 2025 14:34:29 +0000 (0:00:01.854) 0:06:29.431 *********
2025-08-29 14:34:38.407990 | orchestrator | ok: [testbed-manager]
2025-08-29 14:34:38.407996 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:34:38.408002 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:34:38.408009 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:34:38.408015 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:34:38.408022 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:34:38.408034 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:34:38.408041 | orchestrator |
2025-08-29 14:34:38.408047 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-08-29 14:34:38.408053 | orchestrator | Friday 29 August 2025 14:34:30 +0000 (0:00:01.081) 0:06:30.512 *********
2025-08-29 14:34:38.408060 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:34:38.408066 | orchestrator |
2025-08-29 14:34:38.408073 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:34:38.408080 | orchestrator | Friday 29 August 2025 14:34:31 +0000 (0:00:01.400) 0:06:31.912 *********
2025-08-29 14:34:38.408087 | orchestrator |
2025-08-29 14:34:38.408094 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:34:38.408101 | orchestrator | Friday 29 August 2025 14:34:31 +0000 (0:00:00.055) 0:06:31.967 *********
2025-08-29 14:34:38.408107 | orchestrator |
2025-08-29 14:34:38.408114 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:34:38.408121 | orchestrator | Friday 29 August 2025 14:34:31 +0000 (0:00:00.042) 0:06:32.010 *********
2025-08-29 14:34:38.408128 | orchestrator |
2025-08-29 14:34:38.408134 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:34:38.408138 | orchestrator | Friday 29 August 2025 14:34:31 +0000 (0:00:00.044) 0:06:32.054 *********
2025-08-29 14:34:38.408142 | orchestrator |
2025-08-29 14:34:38.408146 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:34:38.408150 | orchestrator | Friday 29 August 2025 14:34:31 +0000 (0:00:00.055) 0:06:32.110 *********
2025-08-29 14:34:38.408153 | orchestrator |
2025-08-29 14:34:38.408157 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:34:38.408161 | orchestrator | Friday 29 August 2025 14:34:31 +0000 (0:00:00.044) 0:06:32.154 *********
2025-08-29 14:34:38.408165 | orchestrator |
2025-08-29 14:34:38.408169 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:34:38.408172 | orchestrator | Friday 29 August 2025 14:34:31 +0000 (0:00:00.046) 0:06:32.200 *********
2025-08-29 14:34:38.408176 | orchestrator |
2025-08-29 14:34:38.408180 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-08-29 14:34:38.408183 | orchestrator | Friday 29 August 2025 14:34:31 +0000 (0:00:00.062) 0:06:32.263 *********
2025-08-29 14:34:38.408187 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:34:38.408191 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:34:38.408194 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:34:38.408198 | orchestrator |
2025-08-29 14:34:38.408202 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-08-29 14:34:38.408205 | orchestrator | Friday 29 August 2025 14:34:33 +0000 (0:00:01.154) 0:06:33.418 *********
2025-08-29 14:34:38.408209 | orchestrator | changed: [testbed-manager]
2025-08-29 14:34:38.408213 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:34:38.408216 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:34:38.408220 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:34:38.408223 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:34:38.408227 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:34:38.408231 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:34:38.408234 | orchestrator |
2025-08-29 14:34:38.408238 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-08-29 14:34:38.408247 | orchestrator | Friday 29 August 2025 14:34:34 +0000 (0:00:01.409) 0:06:34.827 *********
2025-08-29 14:34:38.408251 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:34:38.408255 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:34:38.408258 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:34:38.408262 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:34:38.408266 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:34:38.408269 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:34:38.408277 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:34:38.408281 | orchestrator |
2025-08-29 14:34:38.408285 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-08-29 14:34:38.408289 | orchestrator | Friday 29 August 2025 14:34:37 +0000 (0:00:02.674) 0:06:37.502 *********
2025-08-29 14:34:38.408292 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:34:38.408296 | orchestrator |
2025-08-29 14:34:38.408300 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-08-29 14:34:38.408303 | orchestrator | Friday 29 August 2025 14:34:37 +0000 (0:00:00.160) 0:06:37.662 *********
2025-08-29 14:34:38.408307 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:34:38.408311 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:34:38.408314 | orchestrator | ok: [testbed-manager]
2025-08-29 14:34:38.408318 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:34:38.408327 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:35:05.210576 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:35:05.210703 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:35:05.210711 | orchestrator |
2025-08-29 14:35:05.210716 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-08-29 14:35:05.210721 | orchestrator | Friday 29 August 2025 14:34:38 +0000 (0:00:01.022) 0:06:38.685 *********
2025-08-29 14:35:05.210726 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:35:05.210731 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:35:05.210735 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:35:05.210739 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:35:05.210743 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:35:05.210746 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:35:05.210751 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:35:05.210754 | orchestrator |
2025-08-29 14:35:05.210759 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-08-29 14:35:05.210763 | orchestrator | Friday 29 August 2025 14:34:38 +0000 (0:00:00.607) 0:06:39.292 *********
2025-08-29 14:35:05.210767 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:35:05.210773 | orchestrator |
2025-08-29 14:35:05.210777 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-08-29 14:35:05.210781 | orchestrator | Friday 29 August 2025 14:34:40 +0000 (0:00:01.343) 0:06:40.636 *********
2025-08-29 14:35:05.210785 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:05.210790 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:05.210793 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:05.210798 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:05.210802 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:05.210805 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:05.210809 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:05.210813 | orchestrator |
2025-08-29 14:35:05.210817 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-08-29 14:35:05.210820 | orchestrator | Friday 29 August 2025 14:34:41 +0000 (0:00:00.931) 0:06:41.568 *********
2025-08-29 14:35:05.210824 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-08-29 14:35:05.210828 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-08-29 14:35:05.210832 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-08-29 14:35:05.210836 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-08-29 14:35:05.210840 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-08-29 14:35:05.210843 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-08-29 14:35:05.210847 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-08-29 14:35:05.210851 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-08-29 14:35:05.210855 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-08-29 14:35:05.210872 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-08-29 14:35:05.210876 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-08-29 14:35:05.210880 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-08-29 14:35:05.210884 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-08-29 14:35:05.210888 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-08-29 14:35:05.210892 | orchestrator |
2025-08-29 14:35:05.210895 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-08-29 14:35:05.210899 | orchestrator | Friday 29 August 2025 14:34:43 +0000 (0:00:02.604) 0:06:44.172 *********
2025-08-29 14:35:05.210903 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:35:05.210907 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:35:05.210910 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:35:05.210914 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:35:05.210918 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:35:05.210921 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:35:05.210925 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:35:05.210929 | orchestrator |
2025-08-29 14:35:05.210933 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-08-29 14:35:05.210936 | orchestrator | Friday 29 August 2025 14:34:44 +0000 (0:00:00.607) 0:06:44.779 *********
2025-08-29 14:35:05.210941 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:35:05.210946 | orchestrator |
2025-08-29 14:35:05.210962 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-08-29 14:35:05.210965 | orchestrator | Friday 29 August 2025 14:34:45 +0000 (0:00:01.132) 0:06:45.912 *********
2025-08-29 14:35:05.210969 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:05.210973 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:05.210977 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:05.210980 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:05.210984 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:05.210988 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:05.210991 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:05.210995 | orchestrator |
2025-08-29 14:35:05.210999 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-08-29 14:35:05.211003 | orchestrator | Friday 29 August 2025 14:34:46 +0000 (0:00:00.858) 0:06:46.770 *********
2025-08-29 14:35:05.211007 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:05.211010 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:05.211014 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:05.211018 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:05.211022 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:05.211025 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:05.211029 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:05.211033 | orchestrator |
2025-08-29 14:35:05.211037 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-08-29 14:35:05.211051 | orchestrator | Friday 29 August 2025 14:34:47 +0000 (0:00:00.887) 0:06:47.658 *********
2025-08-29 14:35:05.211055 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:35:05.211059 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:35:05.211063 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:35:05.211066 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:35:05.211070 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:35:05.211074 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:35:05.211077 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:35:05.211081 | orchestrator |
2025-08-29 14:35:05.211085 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-08-29 14:35:05.211089 | orchestrator | Friday 29 August 2025 14:34:47 +0000 (0:00:00.554) 0:06:48.212 *********
2025-08-29 14:35:05.211092 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:05.211103 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:05.211108 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:05.211111 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:05.211115 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:05.211119 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:05.211122 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:05.211126 | orchestrator |
2025-08-29 14:35:05.211130 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-08-29 14:35:05.211134 | orchestrator | Friday 29 August 2025 14:34:49 +0000 (0:00:01.707) 0:06:49.920 *********
2025-08-29 14:35:05.211138 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:35:05.211142 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:35:05.211147 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:35:05.211151 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:35:05.211155 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:35:05.211159 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:35:05.211163 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:35:05.211168 | orchestrator |
2025-08-29 14:35:05.211172 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-08-29 14:35:05.211176 | orchestrator | Friday 29 August 2025 14:34:50 +0000 (0:00:00.603) 0:06:50.523 *********
2025-08-29 14:35:05.211180 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:05.211185 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:35:05.211189 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:35:05.211193 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:35:05.211197 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:35:05.211201 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:35:05.211205 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:35:05.211209 | orchestrator |
2025-08-29 14:35:05.211214 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-08-29 14:35:05.211218 | orchestrator | Friday 29 August 2025 14:34:57 +0000 (0:00:07.359) 0:06:57.882 *********
2025-08-29 14:35:05.211222 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:05.211226 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:35:05.211231 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:35:05.211235 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:35:05.211239 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:35:05.211243 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:35:05.211247 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:35:05.211251 | orchestrator |
2025-08-29 14:35:05.211256 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-08-29 14:35:05.211260 | orchestrator | Friday 29 August 2025 14:34:58 +0000 (0:00:01.332) 0:06:59.215 *********
2025-08-29 14:35:05.211264 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:05.211268 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:35:05.211272 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:35:05.211276 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:35:05.211280 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:35:05.211284 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:35:05.211289 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:35:05.211293 | orchestrator |
2025-08-29 14:35:05.211297 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-08-29 14:35:05.211301 | orchestrator | Friday 29 August 2025 14:35:00 +0000 (0:00:01.969) 0:07:01.185 *********
2025-08-29 14:35:05.211306 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:05.211310 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:35:05.211314 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:35:05.211318 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:35:05.211322 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:35:05.211326 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:35:05.211331 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:35:05.211335 | orchestrator |
2025-08-29 14:35:05.211339 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-08-29 14:35:05.211343 | orchestrator | Friday 29 August 2025 14:35:02 +0000 (0:00:01.796) 0:07:02.982 *********
2025-08-29 14:35:05.211352 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:05.211356 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:05.211360 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:05.211365 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:05.211369 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:05.211373 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:05.211377 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:05.211381 | orchestrator |
2025-08-29 14:35:05.211388 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-08-29 14:35:05.211393 | orchestrator | Friday 29 August 2025 14:35:03 +0000 (0:00:00.889) 0:07:03.871 *********
2025-08-29 14:35:05.211397 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:35:05.211402 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:35:05.211406 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:35:05.211410 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:35:05.211414 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:35:05.211418 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:35:05.211422 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:35:05.211426 | orchestrator |
2025-08-29 14:35:05.211431 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-08-29 14:35:05.211435 | orchestrator | Friday 29 August 2025 14:35:04 +0000 (0:00:01.074) 0:07:04.946 *********
2025-08-29 14:35:05.211439 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:35:05.211443 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:35:05.211447 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:35:05.211451 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:35:05.211456 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:35:05.211460 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:35:05.211464 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:35:05.211468 | orchestrator |
2025-08-29 14:35:05.211475 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-08-29 14:35:38.769676 | orchestrator | Friday 29 August 2025 14:35:05 +0000 (0:00:00.544) 0:07:05.491 *********
2025-08-29 14:35:38.769873 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:38.769892 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:38.769904 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:38.769915 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:38.769926 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:38.769936 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:38.769996 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:38.770010 | orchestrator |
2025-08-29 14:35:38.770078 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-08-29 14:35:38.770091 | orchestrator | Friday 29 August 2025 14:35:05 +0000 (0:00:00.504) 0:07:05.995 *********
2025-08-29 14:35:38.770103 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:38.770115 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:38.770126 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:38.770137 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:38.770148 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:38.770159 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:38.770170 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:38.770180 | orchestrator |
2025-08-29 14:35:38.770192 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-08-29 14:35:38.770203 | orchestrator | Friday 29 August 2025 14:35:06 +0000 (0:00:00.502) 0:07:06.497 *********
2025-08-29 14:35:38.770214 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:38.770225 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:38.770236 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:38.770247 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:38.770257 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:38.770268 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:38.770278 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:38.770289 | orchestrator |
2025-08-29 14:35:38.770300 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-08-29 14:35:38.770342 | orchestrator | Friday 29 August 2025 14:35:06 +0000 (0:00:00.523) 0:07:07.020 *********
2025-08-29 14:35:38.770354 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:38.770365 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:38.770375 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:38.770386 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:38.770396 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:38.770407 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:38.770417 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:38.770428 | orchestrator |
2025-08-29 14:35:38.770439 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-08-29 14:35:38.770450 | orchestrator | Friday 29 August 2025 14:35:12 +0000 (0:00:05.811) 0:07:12.831 *********
2025-08-29 14:35:38.770461 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:35:38.770473 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:35:38.770483 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:35:38.770494 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:35:38.770505 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:35:38.770516 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:35:38.770526 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:35:38.770537 | orchestrator |
2025-08-29 14:35:38.770547 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-08-29 14:35:38.770558 | orchestrator | Friday 29 August 2025 14:35:13 +0000 (0:00:00.563) 0:07:13.395 *********
2025-08-29 14:35:38.770571 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:35:38.770606 | orchestrator |
2025-08-29 14:35:38.770618 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-08-29 14:35:38.770629 | orchestrator | Friday 29 August 2025 14:35:13 +0000 (0:00:00.847) 0:07:14.242 *********
2025-08-29 14:35:38.770639 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:38.770650 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:38.770661 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:38.770671 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:38.770682 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:38.770692 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:38.770703 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:38.770714 | orchestrator |
2025-08-29 14:35:38.770725 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-08-29 14:35:38.770736 | orchestrator | Friday 29 August 2025 14:35:15 +0000 (0:00:02.006) 0:07:16.249 *********
2025-08-29 14:35:38.770747 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:38.770758 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:38.770768 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:38.770779 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:38.770789 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:38.770800 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:38.770810 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:38.770821 | orchestrator |
2025-08-29 14:35:38.770832 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-08-29 14:35:38.770860 | orchestrator | Friday 29 August 2025 14:35:17 +0000 (0:00:01.163) 0:07:17.412 *********
2025-08-29 14:35:38.770872 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:38.770883 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:38.770893 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:38.770904 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:38.770914 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:38.770925 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:38.770935 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:38.770946 | orchestrator |
2025-08-29 14:35:38.770957 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-08-29 14:35:38.770968 | orchestrator | Friday 29 August 2025 14:35:17 +0000 (0:00:00.827) 0:07:18.239 *********
2025-08-29 14:35:38.770994 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 14:35:38.771008 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 14:35:38.771019 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 14:35:38.771051 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 14:35:38.771063 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 14:35:38.771074 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 14:35:38.771084 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 14:35:38.771095 | orchestrator |
2025-08-29 14:35:38.771106 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-08-29 14:35:38.771117 | orchestrator | Friday 29 August 2025 14:35:19 +0000 (0:00:01.861) 0:07:20.101 *********
2025-08-29 14:35:38.771128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:35:38.771139 | orchestrator |
2025-08-29 14:35:38.771149 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-08-29 14:35:38.771160 | orchestrator | Friday 29 August 2025 14:35:20 +0000 (0:00:01.088) 0:07:21.190 *********
2025-08-29 14:35:38.771171 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:35:38.771182 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:35:38.771192 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:35:38.771203 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:35:38.771213 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:35:38.771224 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:35:38.771234 | orchestrator | changed: [testbed-manager]
2025-08-29 14:35:38.771245 | orchestrator |
2025-08-29 14:35:38.771256 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-08-29 14:35:38.771266 | orchestrator | Friday 29 August 2025 14:35:30 +0000 (0:00:09.495) 0:07:30.685 *********
2025-08-29 14:35:38.771277 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:38.771287 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:38.771298 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:38.771309 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:38.771319 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:38.771342 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:38.771353 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:38.771364 | orchestrator |
2025-08-29 14:35:38.771375 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-08-29 14:35:38.771386 | orchestrator | Friday 29 August 2025 14:35:32 +0000 (0:00:02.030) 0:07:32.715 *********
2025-08-29 14:35:38.771396 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:38.771407 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:38.771418 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:38.771428 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:38.771439 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:38.771449 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:38.771460 | orchestrator |
2025-08-29 14:35:38.771471 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-08-29 14:35:38.771482 | orchestrator | Friday 29 August 2025 14:35:33 +0000 (0:00:01.319) 0:07:34.035 *********
2025-08-29 14:35:38.771493 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:35:38.771511 | orchestrator | changed: [testbed-manager]
2025-08-29 14:35:38.771522 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:35:38.771533 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:35:38.771543 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:35:38.771554 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:35:38.771564 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:35:38.771575 | orchestrator |
2025-08-29 14:35:38.771638 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-08-29 14:35:38.771649 | orchestrator |
2025-08-29 14:35:38.771660 | orchestrator | TASK [Include hardening role] **************************************************
2025-08-29 14:35:38.771671 | orchestrator | Friday 29 August 2025 14:35:35 +0000 (0:00:01.323) 0:07:35.358 *********
2025-08-29 14:35:38.771682 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:35:38.771692 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:35:38.771703 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:35:38.771713 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:35:38.771724 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:35:38.771734 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:35:38.771745 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:35:38.771755 | orchestrator |
2025-08-29 14:35:38.771766 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-08-29 14:35:38.771777 | orchestrator |
2025-08-29 14:35:38.771788 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-08-29 14:35:38.771799 | orchestrator | Friday 29 August 2025 14:35:35 +0000 (0:00:00.592) 0:07:35.951 *********
2025-08-29 14:35:38.771810 | orchestrator | changed: [testbed-manager]
2025-08-29 14:35:38.771820 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:35:38.771831 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:35:38.771842 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:35:38.771852 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:35:38.771863 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:35:38.771873 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:35:38.771884 | orchestrator |
2025-08-29 14:35:38.771894 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-08-29 14:35:38.771905 | orchestrator | Friday 29 August 2025 14:35:37 +0000 (0:00:01.612) 0:07:37.563 *********
2025-08-29 14:35:38.771965 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:38.771977 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:38.771987 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:38.771998 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:38.772009 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:38.772019 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:38.772030 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:38.772040 | orchestrator |
2025-08-29 14:35:38.772051 | orchestrator | TASK [Include auditd role] *****************************************************
2025-08-29 14:35:38.772069 | orchestrator | Friday 29 August 2025 14:35:38 +0000 (0:00:01.479) 0:07:39.043 *********
2025-08-29 14:36:03.566095 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:36:03.566207 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:36:03.566219 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:36:03.566226 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:36:03.566233 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:36:03.566240 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:36:03.566246 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:36:03.566254 | orchestrator |
2025-08-29 14:36:03.566262 | orchestrator | TASK [Include smartd role] *****************************************************
2025-08-29 14:36:03.566269 | orchestrator | Friday 29 August 2025 14:35:39 +0000 (0:00:00.538) 0:07:39.581 *********
2025-08-29 14:36:03.566277 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:36:03.566285 | orchestrator |
2025-08-29 14:36:03.566292 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-08-29 14:36:03.566320 | orchestrator | Friday 29 August 2025 14:35:40 +0000 (0:00:01.118) 0:07:40.700 *********
2025-08-29 14:36:03.566329 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:36:03.566337 | orchestrator |
2025-08-29 14:36:03.566343 | orchestrator |
TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-08-29 14:36:03.566350 | orchestrator | Friday 29 August 2025 14:35:41 +0000 (0:00:00.942) 0:07:41.643 ********* 2025-08-29 14:36:03.566356 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:03.566362 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:03.566367 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:03.566373 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:03.566379 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:03.566385 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:03.566391 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:03.566397 | orchestrator | 2025-08-29 14:36:03.566403 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-08-29 14:36:03.566409 | orchestrator | Friday 29 August 2025 14:35:50 +0000 (0:00:09.056) 0:07:50.699 ********* 2025-08-29 14:36:03.566415 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:03.566422 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:03.566428 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:03.566434 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:03.566440 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:03.566446 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:03.566453 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:03.566459 | orchestrator | 2025-08-29 14:36:03.566465 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-08-29 14:36:03.566471 | orchestrator | Friday 29 August 2025 14:35:51 +0000 (0:00:00.924) 0:07:51.624 ********* 2025-08-29 14:36:03.566478 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:03.566484 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:03.566491 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:03.566498 | 
orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:03.566504 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:03.566510 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:03.566517 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:03.566523 | orchestrator | 2025-08-29 14:36:03.566529 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-08-29 14:36:03.566536 | orchestrator | Friday 29 August 2025 14:35:52 +0000 (0:00:01.613) 0:07:53.238 ********* 2025-08-29 14:36:03.566543 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:03.566549 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:03.566555 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:03.566562 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:03.566589 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:03.566596 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:03.566603 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:03.566609 | orchestrator | 2025-08-29 14:36:03.566616 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-08-29 14:36:03.566623 | orchestrator | Friday 29 August 2025 14:35:54 +0000 (0:00:01.892) 0:07:55.130 ********* 2025-08-29 14:36:03.566631 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:03.566638 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:03.566645 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:03.566652 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:03.566658 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:03.566665 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:03.566671 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:03.566678 | orchestrator | 2025-08-29 14:36:03.566699 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-08-29 
14:36:03.566714 | orchestrator | Friday 29 August 2025 14:35:56 +0000 (0:00:01.265) 0:07:56.396 ********* 2025-08-29 14:36:03.566721 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:03.566727 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:03.566733 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:03.566739 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:03.566745 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:03.566751 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:03.566757 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:03.566762 | orchestrator | 2025-08-29 14:36:03.566768 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-08-29 14:36:03.566774 | orchestrator | 2025-08-29 14:36:03.566780 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-08-29 14:36:03.566786 | orchestrator | Friday 29 August 2025 14:35:57 +0000 (0:00:01.359) 0:07:57.756 ********* 2025-08-29 14:36:03.566792 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:36:03.566798 | orchestrator | 2025-08-29 14:36:03.566804 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-08-29 14:36:03.566828 | orchestrator | Friday 29 August 2025 14:35:58 +0000 (0:00:00.873) 0:07:58.629 ********* 2025-08-29 14:36:03.566835 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:03.566843 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:03.566848 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:03.566854 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:03.566860 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:03.566866 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:03.566872 | orchestrator | ok: [testbed-node-5] 2025-08-29 
14:36:03.566878 | orchestrator | 2025-08-29 14:36:03.566884 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-08-29 14:36:03.566890 | orchestrator | Friday 29 August 2025 14:35:59 +0000 (0:00:00.875) 0:07:59.504 ********* 2025-08-29 14:36:03.566896 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:03.566901 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:03.566907 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:03.566914 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:03.566921 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:03.566927 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:03.566934 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:03.566940 | orchestrator | 2025-08-29 14:36:03.566947 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-08-29 14:36:03.566954 | orchestrator | Friday 29 August 2025 14:36:00 +0000 (0:00:01.337) 0:08:00.841 ********* 2025-08-29 14:36:03.566960 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:36:03.566967 | orchestrator | 2025-08-29 14:36:03.566973 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-08-29 14:36:03.566980 | orchestrator | Friday 29 August 2025 14:36:01 +0000 (0:00:00.812) 0:08:01.654 ********* 2025-08-29 14:36:03.566986 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:03.566992 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:03.566999 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:03.567006 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:03.567012 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:03.567021 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:03.567028 | orchestrator | ok: [testbed-node-5] 2025-08-29 
14:36:03.567034 | orchestrator | 2025-08-29 14:36:03.567041 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-08-29 14:36:03.567048 | orchestrator | Friday 29 August 2025 14:36:02 +0000 (0:00:00.793) 0:08:02.447 ********* 2025-08-29 14:36:03.567054 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:03.567061 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:03.567067 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:03.567082 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:03.567089 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:03.567094 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:03.567100 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:03.567107 | orchestrator | 2025-08-29 14:36:03.567113 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:36:03.567121 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-08-29 14:36:03.567128 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-08-29 14:36:03.567134 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 14:36:03.567141 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 14:36:03.567147 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 14:36:03.567154 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 14:36:03.567161 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 14:36:03.567167 | orchestrator | 2025-08-29 14:36:03.567174 | orchestrator | 2025-08-29 
14:36:03.567181 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:36:03.567188 | orchestrator | Friday 29 August 2025 14:36:03 +0000 (0:00:01.383) 0:08:03.831 ********* 2025-08-29 14:36:03.567194 | orchestrator | =============================================================================== 2025-08-29 14:36:03.567201 | orchestrator | osism.commons.packages : Install required packages --------------------- 75.40s 2025-08-29 14:36:03.567207 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.03s 2025-08-29 14:36:03.567214 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.14s 2025-08-29 14:36:03.567220 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.52s 2025-08-29 14:36:03.567226 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.89s 2025-08-29 14:36:03.567234 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.95s 2025-08-29 14:36:03.567240 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.94s 2025-08-29 14:36:03.567246 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.55s 2025-08-29 14:36:03.567253 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.50s 2025-08-29 14:36:03.567259 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.06s 2025-08-29 14:36:03.567275 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.60s 2025-08-29 14:36:04.140497 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.43s 2025-08-29 14:36:04.140696 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.77s 2025-08-29 14:36:04.140713 | 
orchestrator | osism.services.docker : Add repository ---------------------------------- 7.70s 2025-08-29 14:36:04.140725 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.70s 2025-08-29 14:36:04.140736 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.36s 2025-08-29 14:36:04.140766 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.69s 2025-08-29 14:36:04.140814 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.91s 2025-08-29 14:36:04.140850 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.86s 2025-08-29 14:36:04.140861 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.81s 2025-08-29 14:36:04.522099 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-08-29 14:36:04.522201 | orchestrator | + osism apply network 2025-08-29 14:36:17.399037 | orchestrator | 2025-08-29 14:36:17 | INFO  | Task 832a972c-43a1-4889-9c7d-6b9e79809391 (network) was prepared for execution. 2025-08-29 14:36:17.399155 | orchestrator | 2025-08-29 14:36:17 | INFO  | It takes a moment until task 832a972c-43a1-4889-9c7d-6b9e79809391 (network) has been started and output is visible here. 
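The `+ [[ -e /etc/redhat-release ]]` / `+ osism apply network` trace above is the deploy script branching on distro family before handing the network layer to OSISM. A minimal sketch of that decision logic (an assumption for illustration; the real script in osism/testbed differs in detail, and the real run invokes `osism apply network`, which queues an Ansible task and streams its output as seen below):

```shell
#!/usr/bin/env bash
# Hedged sketch of the deploy step visible in the trace (not the actual
# osism/testbed script): pick the distro family, then apply the layer.
apply_network() {
  if [[ -e /etc/redhat-release ]]; then
    family="RedHat"
  else
    family="Debian"   # the Ubuntu 24.04 nodes in this run take this branch
  fi
  # The real step here is: osism apply network
  # We only report the decision so the sketch is self-contained.
  echo "family=${family} action=apply-network"
}

apply_network
```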
2025-08-29 14:36:47.154443 | orchestrator | 2025-08-29 14:36:47.154592 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-08-29 14:36:47.154608 | orchestrator | 2025-08-29 14:36:47.154618 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-08-29 14:36:47.154627 | orchestrator | Friday 29 August 2025 14:36:22 +0000 (0:00:00.291) 0:00:00.291 ********* 2025-08-29 14:36:47.154636 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:47.154646 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:47.154655 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:47.154664 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:47.154672 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:47.154681 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:47.154689 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:47.154699 | orchestrator | 2025-08-29 14:36:47.154707 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-08-29 14:36:47.154716 | orchestrator | Friday 29 August 2025 14:36:22 +0000 (0:00:00.747) 0:00:01.038 ********* 2025-08-29 14:36:47.154726 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:36:47.154738 | orchestrator | 2025-08-29 14:36:47.154746 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-08-29 14:36:47.154755 | orchestrator | Friday 29 August 2025 14:36:24 +0000 (0:00:01.272) 0:00:02.310 ********* 2025-08-29 14:36:47.154764 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:47.154772 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:47.154781 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:47.154789 | 
orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:47.154797 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:47.154806 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:47.154814 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:47.154823 | orchestrator | 2025-08-29 14:36:47.154832 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-08-29 14:36:47.154840 | orchestrator | Friday 29 August 2025 14:36:26 +0000 (0:00:01.959) 0:00:04.270 ********* 2025-08-29 14:36:47.154849 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:47.154857 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:47.154866 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:47.154874 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:47.154883 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:47.154891 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:47.154900 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:47.154907 | orchestrator | 2025-08-29 14:36:47.154915 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-08-29 14:36:47.154923 | orchestrator | Friday 29 August 2025 14:36:27 +0000 (0:00:01.745) 0:00:06.015 ********* 2025-08-29 14:36:47.154931 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-08-29 14:36:47.154940 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-08-29 14:36:47.154949 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-08-29 14:36:47.154973 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-08-29 14:36:47.154983 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-08-29 14:36:47.155012 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-08-29 14:36:47.155021 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-08-29 14:36:47.155029 | orchestrator | 2025-08-29 14:36:47.155038 | orchestrator | TASK [osism.commons.network : 
Prepare netplan configuration template] ********** 2025-08-29 14:36:47.155046 | orchestrator | Friday 29 August 2025 14:36:28 +0000 (0:00:00.962) 0:00:06.978 ********* 2025-08-29 14:36:47.155055 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 14:36:47.155065 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 14:36:47.155073 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-08-29 14:36:47.155082 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-08-29 14:36:47.155090 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 14:36:47.155099 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 14:36:47.155107 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 14:36:47.155116 | orchestrator | 2025-08-29 14:36:47.155124 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-08-29 14:36:47.155133 | orchestrator | Friday 29 August 2025 14:36:32 +0000 (0:00:03.798) 0:00:10.776 ********* 2025-08-29 14:36:47.155141 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:47.155149 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:47.155157 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:47.155166 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:47.155174 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:47.155183 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:47.155192 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:47.155200 | orchestrator | 2025-08-29 14:36:47.155209 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-08-29 14:36:47.155217 | orchestrator | Friday 29 August 2025 14:36:34 +0000 (0:00:01.464) 0:00:12.241 ********* 2025-08-29 14:36:47.155226 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 14:36:47.155235 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 14:36:47.155243 | orchestrator | ok: [testbed-node-1 
-> localhost] 2025-08-29 14:36:47.155252 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-08-29 14:36:47.155261 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 14:36:47.155269 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 14:36:47.155277 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 14:36:47.155284 | orchestrator | 2025-08-29 14:36:47.155292 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-08-29 14:36:47.155300 | orchestrator | Friday 29 August 2025 14:36:36 +0000 (0:00:02.079) 0:00:14.321 ********* 2025-08-29 14:36:47.155307 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:47.155315 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:47.155323 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:47.155330 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:47.155338 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:47.155345 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:47.155353 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:47.155361 | orchestrator | 2025-08-29 14:36:47.155368 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-08-29 14:36:47.155392 | orchestrator | Friday 29 August 2025 14:36:37 +0000 (0:00:01.158) 0:00:15.479 ********* 2025-08-29 14:36:47.155400 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:36:47.155408 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:36:47.155416 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:36:47.155423 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:36:47.155431 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:36:47.155438 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:36:47.155446 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:36:47.155453 | orchestrator | 2025-08-29 14:36:47.155461 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2025-08-29 14:36:47.155469 | orchestrator | Friday 29 August 2025 14:36:37 +0000 (0:00:00.662) 0:00:16.142 ********* 2025-08-29 14:36:47.155477 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:47.155490 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:47.155498 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:47.155506 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:47.155513 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:47.155521 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:47.155528 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:47.155536 | orchestrator | 2025-08-29 14:36:47.155563 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-08-29 14:36:47.155571 | orchestrator | Friday 29 August 2025 14:36:40 +0000 (0:00:02.130) 0:00:18.272 ********* 2025-08-29 14:36:47.155579 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:36:47.155587 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:36:47.155594 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:36:47.155602 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:36:47.155610 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:36:47.155618 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:36:47.155626 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-08-29 14:36:47.155635 | orchestrator | 2025-08-29 14:36:47.155642 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-08-29 14:36:47.155650 | orchestrator | Friday 29 August 2025 14:36:41 +0000 (0:00:00.974) 0:00:19.247 ********* 2025-08-29 14:36:47.155658 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:47.155666 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:47.155673 | orchestrator | changed: [testbed-node-1] 2025-08-29 
14:36:47.155681 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:47.155688 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:47.155696 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:47.155703 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:47.155711 | orchestrator | 2025-08-29 14:36:47.155718 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-08-29 14:36:47.155726 | orchestrator | Friday 29 August 2025 14:36:42 +0000 (0:00:01.615) 0:00:20.863 ********* 2025-08-29 14:36:47.155738 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:36:47.155748 | orchestrator | 2025-08-29 14:36:47.155756 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-08-29 14:36:47.155764 | orchestrator | Friday 29 August 2025 14:36:43 +0000 (0:00:01.347) 0:00:22.210 ********* 2025-08-29 14:36:47.155771 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:47.155779 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:47.155786 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:47.155794 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:47.155802 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:47.155809 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:47.155817 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:47.155825 | orchestrator | 2025-08-29 14:36:47.155832 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-08-29 14:36:47.155840 | orchestrator | Friday 29 August 2025 14:36:44 +0000 (0:00:00.990) 0:00:23.201 ********* 2025-08-29 14:36:47.155848 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:47.155855 | orchestrator | ok: [testbed-node-0] 2025-08-29 
14:36:47.155863 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:47.155871 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:47.155878 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:47.155886 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:47.155893 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:47.155901 | orchestrator | 2025-08-29 14:36:47.155909 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-08-29 14:36:47.155916 | orchestrator | Friday 29 August 2025 14:36:45 +0000 (0:00:00.886) 0:00:24.087 ********* 2025-08-29 14:36:47.155924 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:36:47.155938 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:36:47.155946 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:36:47.155953 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:36:47.155961 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:36:47.155968 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:36:47.155976 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:36:47.155984 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:36:47.155991 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:36:47.155999 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:36:47.156006 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:36:47.156014 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:36:47.156022 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:36:47.156029 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:36:47.156037 | orchestrator | 2025-08-29 14:36:47.156050 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-08-29 14:37:06.063802 | orchestrator | Friday 29 August 2025 14:36:47 +0000 (0:00:01.268) 0:00:25.355 ********* 2025-08-29 14:37:06.063934 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:37:06.063959 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:37:06.063977 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:37:06.063993 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:37:06.064008 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:37:06.064025 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:37:06.064040 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:37:06.064054 | orchestrator | 2025-08-29 14:37:06.064070 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-08-29 14:37:06.064084 | orchestrator | Friday 29 August 2025 14:36:47 +0000 (0:00:00.672) 0:00:26.027 ********* 2025-08-29 14:37:06.064100 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-5, testbed-node-1, testbed-node-3, testbed-node-2, testbed-node-4 2025-08-29 14:37:06.064118 | orchestrator | 2025-08-29 14:37:06.064132 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-08-29 14:37:06.064147 | orchestrator | Friday 29 August 2025 14:36:53 +0000 (0:00:05.244) 0:00:31.271 ********* 2025-08-29 14:37:06.064166 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:06.064186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:06.064205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:06.064221 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:06.064282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:06.064299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:06.064316 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:06.064332 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': 
{'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:06.064349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:06.064365 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:06.064381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:06.064422 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:06.064439 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:06.064456 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 
'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:06.064472 | orchestrator | 2025-08-29 14:37:06.064488 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-08-29 14:37:06.064504 | orchestrator | Friday 29 August 2025 14:36:59 +0000 (0:00:06.724) 0:00:37.995 ********* 2025-08-29 14:37:06.064520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:06.064568 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:06.064602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:06.064632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:06.064656 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:06.064675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', 
'192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:06.064692 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:06.064710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:06.064727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:37:06.064745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:06.064763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:06.064778 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:06.064809 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:13.179089 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:37:13.179233 | orchestrator | 2025-08-29 14:37:13.179263 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-08-29 14:37:13.179285 | orchestrator | Friday 29 August 2025 14:37:06 +0000 (0:00:06.262) 0:00:44.257 ********* 2025-08-29 14:37:13.179302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:37:13.179314 | orchestrator | 2025-08-29 14:37:13.179325 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-08-29 14:37:13.179336 | orchestrator | Friday 29 August 2025 14:37:07 +0000 (0:00:01.365) 0:00:45.623 ********* 2025-08-29 14:37:13.179372 | orchestrator | ok: [testbed-manager] 2025-08-29 14:37:13.179384 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:37:13.179395 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:37:13.179405 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:37:13.179416 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:37:13.179426 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:37:13.179436 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:37:13.179447 | orchestrator | 2025-08-29 14:37:13.179458 | orchestrator | TASK [osism.commons.network : Remove unused configuration 
files] *************** 2025-08-29 14:37:13.179469 | orchestrator | Friday 29 August 2025 14:37:08 +0000 (0:00:01.219) 0:00:46.842 ********* 2025-08-29 14:37:13.179480 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:37:13.179492 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:37:13.179502 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:37:13.179513 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:37:13.179523 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:37:13.179567 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:37:13.179580 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:37:13.179609 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:37:13.179621 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:37:13.179633 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:37:13.179646 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:37:13.179657 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:37:13.179669 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:37:13.179681 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:37:13.179693 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:37:13.179706 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:37:13.179717 | orchestrator | skipping: [testbed-node-2] => 
(item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:37:13.179729 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:37:13.179741 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:37:13.179753 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:37:13.179765 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:37:13.179777 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:37:13.179789 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:37:13.179801 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:37:13.179814 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:37:13.179825 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:37:13.179838 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:37:13.179857 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:37:13.179875 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:37:13.179894 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:37:13.179912 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:37:13.179933 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:37:13.179967 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:37:13.179987 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:37:13.180008 | orchestrator | skipping: 
[testbed-node-5]
2025-08-29 14:37:13.180027 | orchestrator |
2025-08-29 14:37:13.180045 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2025-08-29 14:37:13.180078 | orchestrator | Friday 29 August 2025 14:37:10 +0000 (0:00:02.322) 0:00:49.164 *********
2025-08-29 14:37:13.180089 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:37:13.180100 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:37:13.180111 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:37:13.180121 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:37:13.180132 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:37:13.180142 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:37:13.180159 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:37:13.180185 | orchestrator |
2025-08-29 14:37:13.180206 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-08-29 14:37:13.180223 | orchestrator | Friday 29 August 2025 14:37:11 +0000 (0:00:00.735) 0:00:49.900 *********
2025-08-29 14:37:13.180241 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:37:13.180258 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:37:13.180275 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:37:13.180292 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:37:13.180312 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:37:13.180330 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:37:13.180348 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:37:13.180365 | orchestrator |
2025-08-29 14:37:13.180377 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:37:13.180389 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 14:37:13.180401 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 14:37:13.180412 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 14:37:13.180423 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 14:37:13.180434 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 14:37:13.180444 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 14:37:13.180463 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 14:37:13.180474 | orchestrator |
2025-08-29 14:37:13.180485 | orchestrator |
2025-08-29 14:37:13.180496 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:37:13.180507 | orchestrator | Friday 29 August 2025 14:37:12 +0000 (0:00:00.996) 0:00:50.897 *********
2025-08-29 14:37:13.180517 | orchestrator | ===============================================================================
2025-08-29 14:37:13.180528 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.72s
2025-08-29 14:37:13.180612 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.26s
2025-08-29 14:37:13.180623 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 5.24s
2025-08-29 14:37:13.180634 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.80s
2025-08-29 14:37:13.180644 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.32s
2025-08-29 14:37:13.180665 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.13s
2025-08-29 14:37:13.180676 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.08s
2025-08-29 14:37:13.180686 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.96s
2025-08-29 14:37:13.180697 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.75s
2025-08-29 14:37:13.180707 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.62s
2025-08-29 14:37:13.180718 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.47s
2025-08-29 14:37:13.180728 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.37s
2025-08-29 14:37:13.180739 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.35s
2025-08-29 14:37:13.180749 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.27s
2025-08-29 14:37:13.180759 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.27s
2025-08-29 14:37:13.180770 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.22s
2025-08-29 14:37:13.180781 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.16s
2025-08-29 14:37:13.180791 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 1.00s
2025-08-29 14:37:13.180802 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.99s
2025-08-29 14:37:13.180813 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.97s
2025-08-29 14:37:13.633273 | orchestrator | + osism apply wireguard
2025-08-29 14:37:25.954996 | orchestrator | 2025-08-29 14:37:25 | INFO  | Task 39758e39-6dc1-4757-a6d0-c72460ba95fb (wireguard) was prepared for execution.
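The two netdev/network tasks above render one configuration file pair per VXLAN device on every host. As a rough sketch of what the role could be templating for `vxlan0` on testbed-node-0: the VNI 42, local IP 192.168.16.10, and MTU 1350 are taken from the log, and the `30-vxlan0.*` file names match the paths seen in the cleanup task; the section layout and the FDB handling are assumptions about the role's templates, not taken from this log.

```shell
# Sketch only: approximates the files osism.commons.network could generate
# for vxlan0 on testbed-node-0 (values from the log; layout is an assumption).
mkdir -p /tmp/networkd-sketch

cat > /tmp/networkd-sketch/30-vxlan0.netdev <<'EOF'
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.10
EOF

cat > /tmp/networkd-sketch/30-vxlan0.network <<'EOF'
[Match]
Name=vxlan0
EOF

# The per-peer 'dests' list in the log cannot be expressed in a .netdev file;
# unicast flooding to each peer would need FDB entries, e.g. (not run here):
#   bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst 192.168.16.11
```

The "Copy dispatcher scripts" task in the recap suggests such post-up plumbing is handled via networkd-dispatcher rather than in the unit files themselves.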
2025-08-29 14:37:25.955109 | orchestrator | 2025-08-29 14:37:25 | INFO  | It takes a moment until task 39758e39-6dc1-4757-a6d0-c72460ba95fb (wireguard) has been started and output is visible here. 2025-08-29 14:37:47.520145 | orchestrator | 2025-08-29 14:37:47.520257 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-08-29 14:37:47.520273 | orchestrator | 2025-08-29 14:37:47.520286 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-08-29 14:37:47.520298 | orchestrator | Friday 29 August 2025 14:37:30 +0000 (0:00:00.228) 0:00:00.228 ********* 2025-08-29 14:37:47.520309 | orchestrator | ok: [testbed-manager] 2025-08-29 14:37:47.520322 | orchestrator | 2025-08-29 14:37:47.520333 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-08-29 14:37:47.520344 | orchestrator | Friday 29 August 2025 14:37:32 +0000 (0:00:01.699) 0:00:01.928 ********* 2025-08-29 14:37:47.520355 | orchestrator | changed: [testbed-manager] 2025-08-29 14:37:47.520366 | orchestrator | 2025-08-29 14:37:47.520377 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-08-29 14:37:47.520388 | orchestrator | Friday 29 August 2025 14:37:39 +0000 (0:00:07.165) 0:00:09.093 ********* 2025-08-29 14:37:47.520398 | orchestrator | changed: [testbed-manager] 2025-08-29 14:37:47.520409 | orchestrator | 2025-08-29 14:37:47.520420 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-08-29 14:37:47.520431 | orchestrator | Friday 29 August 2025 14:37:39 +0000 (0:00:00.602) 0:00:09.696 ********* 2025-08-29 14:37:47.520442 | orchestrator | changed: [testbed-manager] 2025-08-29 14:37:47.520453 | orchestrator | 2025-08-29 14:37:47.520463 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-08-29 14:37:47.520474 | orchestrator 
| Friday 29 August 2025 14:37:40 +0000 (0:00:00.489) 0:00:10.185 ********* 2025-08-29 14:37:47.520485 | orchestrator | ok: [testbed-manager] 2025-08-29 14:37:47.520495 | orchestrator | 2025-08-29 14:37:47.520506 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-08-29 14:37:47.520562 | orchestrator | Friday 29 August 2025 14:37:40 +0000 (0:00:00.581) 0:00:10.767 ********* 2025-08-29 14:37:47.520600 | orchestrator | ok: [testbed-manager] 2025-08-29 14:37:47.520612 | orchestrator | 2025-08-29 14:37:47.520623 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-08-29 14:37:47.520633 | orchestrator | Friday 29 August 2025 14:37:41 +0000 (0:00:00.639) 0:00:11.406 ********* 2025-08-29 14:37:47.520644 | orchestrator | ok: [testbed-manager] 2025-08-29 14:37:47.520654 | orchestrator | 2025-08-29 14:37:47.520665 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-08-29 14:37:47.520675 | orchestrator | Friday 29 August 2025 14:37:42 +0000 (0:00:00.550) 0:00:11.956 ********* 2025-08-29 14:37:47.520685 | orchestrator | changed: [testbed-manager] 2025-08-29 14:37:47.520696 | orchestrator | 2025-08-29 14:37:47.520706 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-08-29 14:37:47.520731 | orchestrator | Friday 29 August 2025 14:37:43 +0000 (0:00:01.287) 0:00:13.244 ********* 2025-08-29 14:37:47.520742 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 14:37:47.520753 | orchestrator | changed: [testbed-manager] 2025-08-29 14:37:47.520764 | orchestrator | 2025-08-29 14:37:47.520775 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-08-29 14:37:47.520786 | orchestrator | Friday 29 August 2025 14:37:44 +0000 (0:00:01.043) 0:00:14.287 ********* 2025-08-29 14:37:47.520796 | orchestrator | changed: 
[testbed-manager]
2025-08-29 14:37:47.520807 | orchestrator |
2025-08-29 14:37:47.520817 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-08-29 14:37:47.520828 | orchestrator | Friday 29 August 2025 14:37:46 +0000 (0:00:01.797) 0:00:16.085 *********
2025-08-29 14:37:47.520839 | orchestrator | changed: [testbed-manager]
2025-08-29 14:37:47.520849 | orchestrator |
2025-08-29 14:37:47.520860 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:37:47.520871 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:37:47.520883 | orchestrator |
2025-08-29 14:37:47.520893 | orchestrator |
2025-08-29 14:37:47.520904 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:37:47.520914 | orchestrator | Friday 29 August 2025 14:37:47 +0000 (0:00:00.972) 0:00:17.058 *********
2025-08-29 14:37:47.520925 | orchestrator | ===============================================================================
2025-08-29 14:37:47.520936 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.17s
2025-08-29 14:37:47.520946 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.80s
2025-08-29 14:37:47.520957 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.70s
2025-08-29 14:37:47.520968 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.29s
2025-08-29 14:37:47.520978 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.04s
2025-08-29 14:37:47.520989 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.97s
2025-08-29 14:37:47.520999 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.64s
2025-08-29 14:37:47.521010 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.60s
2025-08-29 14:37:47.521021 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.58s
2025-08-29 14:37:47.521031 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.55s
2025-08-29 14:37:47.521042 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.49s
2025-08-29 14:37:47.930378 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-08-29 14:37:47.971474 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-08-29 14:37:47.971640 | orchestrator | Dload Upload Total Spent Left Speed
2025-08-29 14:37:48.044257 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 204 0 --:--:-- --:--:-- --:--:-- 205
2025-08-29 14:37:48.060142 | orchestrator | + osism apply --environment custom workarounds
2025-08-29 14:37:50.003362 | orchestrator | 2025-08-29 14:37:50 | INFO  | Trying to run play workarounds in environment custom
2025-08-29 14:38:00.127475 | orchestrator | 2025-08-29 14:38:00 | INFO  | Task bbc33c13-c41e-4519-a8d1-6025754ed53e (workarounds) was prepared for execution.
2025-08-29 14:38:00.127608 | orchestrator | 2025-08-29 14:38:00 | INFO  | It takes a moment until task bbc33c13-c41e-4519-a8d1-6025754ed53e (workarounds) has been started and output is visible here.
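The wireguard play above installs the package, creates a server keypair and preshared key (presumably via `wg genkey` / `wg pubkey` / `wg genpsk`), renders `wg0.conf` plus client configs, and enables `wg-quick@wg0.service`. A placeholder sketch of such a server config follows; the interface address, port, and AllowedIPs are invented for illustration and the keys are literal placeholders, none of these values appear in this log.

```shell
# Placeholder sketch of a wg0.conf like the one the role renders on
# testbed-manager. All values are illustrative assumptions, not log data.
umask 077
cat > /tmp/wg0.conf <<'EOF'
[Interface]
Address = 192.168.64.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.64.2/32
EOF
# On the manager this would then be activated with:
#   systemctl enable --now wg-quick@wg0.service
```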
2025-08-29 14:38:25.910522 | orchestrator | 2025-08-29 14:38:25.910624 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:38:25.910638 | orchestrator | 2025-08-29 14:38:25.910649 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-08-29 14:38:25.910658 | orchestrator | Friday 29 August 2025 14:38:04 +0000 (0:00:00.159) 0:00:00.159 ********* 2025-08-29 14:38:25.910667 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-08-29 14:38:25.910676 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-08-29 14:38:25.910686 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-08-29 14:38:25.910694 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-08-29 14:38:25.910703 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-08-29 14:38:25.910711 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-08-29 14:38:25.910720 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-08-29 14:38:25.910729 | orchestrator | 2025-08-29 14:38:25.910737 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-08-29 14:38:25.910746 | orchestrator | 2025-08-29 14:38:25.910755 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-08-29 14:38:25.910763 | orchestrator | Friday 29 August 2025 14:38:05 +0000 (0:00:00.881) 0:00:01.040 ********* 2025-08-29 14:38:25.910772 | orchestrator | ok: [testbed-manager] 2025-08-29 14:38:25.910782 | orchestrator | 2025-08-29 14:38:25.910791 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-08-29 14:38:25.910800 | orchestrator | 2025-08-29 14:38:25.910808 | orchestrator | TASK [Apply netplan 
configuration] *********************************************
2025-08-29 14:38:25.910817 | orchestrator | Friday 29 August 2025 14:38:07 +0000 (0:00:02.511) 0:00:03.552 *********
2025-08-29 14:38:25.910826 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:38:25.910850 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:38:25.910859 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:38:25.910867 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:38:25.910876 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:38:25.910885 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:25.910893 | orchestrator |
2025-08-29 14:38:25.910902 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-08-29 14:38:25.910910 | orchestrator |
2025-08-29 14:38:25.910919 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-08-29 14:38:25.910928 | orchestrator | Friday 29 August 2025 14:38:09 +0000 (0:00:01.762) 0:00:05.314 *********
2025-08-29 14:38:25.910938 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-08-29 14:38:25.910948 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-08-29 14:38:25.910957 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-08-29 14:38:25.910966 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-08-29 14:38:25.910974 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-08-29 14:38:25.910983 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-08-29 14:38:25.911012 | orchestrator |
2025-08-29 14:38:25.911021 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-08-29 14:38:25.911030 | orchestrator | Friday 29 August 2025 14:38:11 +0000 (0:00:01.532) 0:00:06.847 *********
2025-08-29 14:38:25.911038 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:38:25.911047 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:38:25.911057 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:38:25.911066 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:38:25.911076 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:38:25.911085 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:38:25.911095 | orchestrator |
2025-08-29 14:38:25.911105 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-08-29 14:38:25.911114 | orchestrator | Friday 29 August 2025 14:38:15 +0000 (0:00:03.831) 0:00:10.679 *********
2025-08-29 14:38:25.911124 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:38:25.911133 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:38:25.911143 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:38:25.911152 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:38:25.911162 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:38:25.911172 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:38:25.911181 | orchestrator |
2025-08-29 14:38:25.911191 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-08-29 14:38:25.911201 | orchestrator |
2025-08-29 14:38:25.911210 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-08-29 14:38:25.911220 | orchestrator | Friday 29 August 2025 14:38:15 +0000 (0:00:00.799) 0:00:11.479 *********
2025-08-29 14:38:25.911229 | orchestrator | changed: [testbed-manager]
2025-08-29 14:38:25.911239 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:38:25.911248 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:38:25.911258 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:38:25.911268 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:38:25.911277 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:38:25.911287 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:38:25.911296 | orchestrator |
2025-08-29 14:38:25.911306 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-08-29 14:38:25.911315 | orchestrator | Friday 29 August 2025 14:38:17 +0000 (0:00:01.600) 0:00:13.079 *********
2025-08-29 14:38:25.911325 | orchestrator | changed: [testbed-manager]
2025-08-29 14:38:25.911334 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:38:25.911343 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:38:25.911353 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:38:25.911363 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:38:25.911372 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:38:25.911397 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:38:25.911407 | orchestrator |
2025-08-29 14:38:25.911417 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-08-29 14:38:25.911427 | orchestrator | Friday 29 August 2025 14:38:19 +0000 (0:00:01.596) 0:00:14.675 *********
2025-08-29 14:38:25.911437 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:38:25.911445 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:38:25.911454 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:38:25.911462 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:38:25.911471 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:38:25.911479 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:25.911529 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:25.911539 | orchestrator |
2025-08-29 14:38:25.911548 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-08-29 14:38:25.911557 | orchestrator | Friday 29 August 2025 14:38:20 +0000 (0:00:01.495) 0:00:16.171 *********
2025-08-29 14:38:25.911566 | orchestrator | changed: [testbed-manager]
2025-08-29 14:38:25.911574 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:38:25.911583 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:38:25.911598 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:38:25.911607 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:38:25.911616 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:38:25.911624 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:38:25.911632 | orchestrator |
2025-08-29 14:38:25.911641 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-08-29 14:38:25.911650 | orchestrator | Friday 29 August 2025 14:38:22 +0000 (0:00:01.807) 0:00:17.978 *********
2025-08-29 14:38:25.911658 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:38:25.911666 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:38:25.911675 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:38:25.911683 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:38:25.911692 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:38:25.911700 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:38:25.911708 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:38:25.911717 | orchestrator |
2025-08-29 14:38:25.911725 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-08-29 14:38:25.911734 | orchestrator |
2025-08-29 14:38:25.911747 | orchestrator | TASK [Install python3-docker] **************************************************
2025-08-29 14:38:25.911756 | orchestrator | Friday 29 August 2025 14:38:23 +0000 (0:00:00.698) 0:00:18.677 *********
2025-08-29 14:38:25.911764 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:38:25.911773 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:38:25.911781 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:38:25.911790 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:38:25.911798 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:38:25.911807 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:25.911815 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:25.911824 | orchestrator |
2025-08-29 14:38:25.911832 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:38:25.911842 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:38:25.911852 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:38:25.911860 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:38:25.911869 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:38:25.911877 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:38:25.911886 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:38:25.911894 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:38:25.911903 | orchestrator |
2025-08-29 14:38:25.911911 | orchestrator |
2025-08-29 14:38:25.911920 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:38:25.911929 | orchestrator | Friday 29 August 2025 14:38:25 +0000 (0:00:02.770) 0:00:21.448 *********
2025-08-29 14:38:25.911937 | orchestrator | ===============================================================================
2025-08-29 14:38:25.911946 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.83s
2025-08-29 14:38:25.911954 | orchestrator | Install python3-docker -------------------------------------------------- 2.77s
2025-08-29 14:38:25.911963 | orchestrator | Apply netplan configuration --------------------------------------------- 2.51s
2025-08-29 14:38:25.911971 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.81s
2025-08-29 14:38:25.911986 | orchestrator | Apply netplan configuration --------------------------------------------- 1.76s
2025-08-29 14:38:25.911994 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.60s
2025-08-29 14:38:25.912003 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.60s
2025-08-29 14:38:25.912011 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.53s
2025-08-29 14:38:25.912020 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.50s
2025-08-29 14:38:25.912028 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.88s
2025-08-29 14:38:25.912037 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.80s
2025-08-29 14:38:25.912051 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.70s
2025-08-29 14:38:26.738576 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-08-29 14:38:38.812605 | orchestrator | 2025-08-29 14:38:38 | INFO  | Task 3f1861a9-9b06-4554-a6e5-ae547eef5a35 (reboot) was prepared for execution.
2025-08-29 14:38:38.812722 | orchestrator | 2025-08-29 14:38:38 | INFO  | It takes a moment until task 3f1861a9-9b06-4554-a6e5-ae547eef5a35 (reboot) has been started and output is visible here.
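The CA play above copies `testbed.crt` to every node, then runs `update-ca-certificates` (which reported `changed`) while the `update-ca-trust` task is skipped, because the testbed nodes are Debian-family; on RedHat-family hosts the roles would be reversed. The branch can be sketched as follows; `ca_update_command` is a hypothetical helper, not part of the testbed playbooks, and the family names are illustrative:

```shell
# Hypothetical helper mirroring the two tasks above: choose the system
# trust-store refresh command for a given OS family. Exactly one branch
# applies per host, which is why one task is always "skipping" in the log.
ca_update_command() {
  case "$1" in
    Debian|Ubuntu)
      # Debian family: certs dropped under /usr/local/share/ca-certificates
      echo "update-ca-certificates" ;;
    RedHat|CentOS|Rocky)
      # RedHat family: anchors under /etc/pki/ca-trust/source/anchors
      echo "update-ca-trust extract" ;;
    *)
      echo "unsupported OS family: $1" >&2
      return 1 ;;
  esac
}
```

On Ubuntu 24.04 (this job), only the `update-ca-certificates` path runs.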
2025-08-29 14:38:49.861461 | orchestrator |
2025-08-29 14:38:49.861660 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 14:38:49.861678 | orchestrator |
2025-08-29 14:38:49.861690 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 14:38:49.861701 | orchestrator | Friday 29 August 2025 14:38:43 +0000 (0:00:00.225) 0:00:00.225 *********
2025-08-29 14:38:49.861712 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:38:49.861724 | orchestrator |
2025-08-29 14:38:49.861735 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 14:38:49.861746 | orchestrator | Friday 29 August 2025 14:38:43 +0000 (0:00:00.153) 0:00:00.378 *********
2025-08-29 14:38:49.861757 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:38:49.861767 | orchestrator |
2025-08-29 14:38:49.861778 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 14:38:49.861789 | orchestrator | Friday 29 August 2025 14:38:44 +0000 (0:00:01.027) 0:00:01.406 *********
2025-08-29 14:38:49.861800 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:38:49.861810 | orchestrator |
2025-08-29 14:38:49.861821 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 14:38:49.861832 | orchestrator |
2025-08-29 14:38:49.861842 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 14:38:49.861853 | orchestrator | Friday 29 August 2025 14:38:44 +0000 (0:00:00.159) 0:00:01.565 *********
2025-08-29 14:38:49.861864 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:38:49.861875 | orchestrator |
2025-08-29 14:38:49.861886 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 14:38:49.861897 | orchestrator | Friday 29 August 2025 14:38:44 +0000 (0:00:00.126) 0:00:01.692 *********
2025-08-29 14:38:49.861907 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:38:49.861918 | orchestrator |
2025-08-29 14:38:49.861929 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 14:38:49.861939 | orchestrator | Friday 29 August 2025 14:38:45 +0000 (0:00:00.725) 0:00:02.418 *********
2025-08-29 14:38:49.861950 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:38:49.861960 | orchestrator |
2025-08-29 14:38:49.861971 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 14:38:49.861982 | orchestrator |
2025-08-29 14:38:49.861992 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 14:38:49.862003 | orchestrator | Friday 29 August 2025 14:38:45 +0000 (0:00:00.126) 0:00:02.544 *********
2025-08-29 14:38:49.862013 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:38:49.862117 | orchestrator |
2025-08-29 14:38:49.862129 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 14:38:49.862167 | orchestrator | Friday 29 August 2025 14:38:45 +0000 (0:00:00.290) 0:00:02.834 *********
2025-08-29 14:38:49.862178 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:38:49.862188 | orchestrator |
2025-08-29 14:38:49.862199 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 14:38:49.862210 | orchestrator | Friday 29 August 2025 14:38:46 +0000 (0:00:00.679) 0:00:03.514 *********
2025-08-29 14:38:49.862220 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:38:49.862231 | orchestrator |
2025-08-29 14:38:49.862241 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 14:38:49.862252 | orchestrator |
2025-08-29 14:38:49.862279 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 14:38:49.862290 | orchestrator | Friday 29 August 2025 14:38:46 +0000 (0:00:00.116) 0:00:03.631 *********
2025-08-29 14:38:49.862301 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:38:49.862312 | orchestrator |
2025-08-29 14:38:49.862322 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 14:38:49.862333 | orchestrator | Friday 29 August 2025 14:38:46 +0000 (0:00:00.127) 0:00:03.759 *********
2025-08-29 14:38:49.862344 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:38:49.862355 | orchestrator |
2025-08-29 14:38:49.862366 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 14:38:49.862376 | orchestrator | Friday 29 August 2025 14:38:47 +0000 (0:00:00.688) 0:00:04.447 *********
2025-08-29 14:38:49.862387 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:38:49.862398 | orchestrator |
2025-08-29 14:38:49.862408 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 14:38:49.862419 | orchestrator |
2025-08-29 14:38:49.862430 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 14:38:49.862440 | orchestrator | Friday 29 August 2025 14:38:47 +0000 (0:00:00.128) 0:00:04.576 *********
2025-08-29 14:38:49.862450 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:38:49.862461 | orchestrator |
2025-08-29 14:38:49.862494 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 14:38:49.862505 | orchestrator | Friday 29 August 2025 14:38:47 +0000 (0:00:00.129) 0:00:04.706 *********
2025-08-29 14:38:49.862516 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:38:49.862526 | orchestrator |
2025-08-29 14:38:49.862537 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 14:38:49.862547 | orchestrator | Friday 29 August 2025 14:38:48 +0000 (0:00:00.703) 0:00:05.409 *********
2025-08-29 14:38:49.862558 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:38:49.862568 | orchestrator |
2025-08-29 14:38:49.862578 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 14:38:49.862589 | orchestrator |
2025-08-29 14:38:49.862599 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 14:38:49.862610 | orchestrator | Friday 29 August 2025 14:38:48 +0000 (0:00:00.123) 0:00:05.533 *********
2025-08-29 14:38:49.862620 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:38:49.862630 | orchestrator |
2025-08-29 14:38:49.862641 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 14:38:49.862651 | orchestrator | Friday 29 August 2025 14:38:48 +0000 (0:00:00.105) 0:00:05.638 *********
2025-08-29 14:38:49.862662 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:38:49.862672 | orchestrator |
2025-08-29 14:38:49.862683 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 14:38:49.862693 | orchestrator | Friday 29 August 2025 14:38:49 +0000 (0:00:00.694) 0:00:06.334 *********
2025-08-29 14:38:49.862724 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:38:49.862735 | orchestrator |
2025-08-29 14:38:49.862746 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:38:49.862758 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:38:49.862778 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:38:49.862789 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:38:49.862799 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:38:49.862810 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:38:49.862826 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:38:49.862837 | orchestrator |
2025-08-29 14:38:49.862848 | orchestrator |
2025-08-29 14:38:49.862858 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:38:49.862869 | orchestrator | Friday 29 August 2025 14:38:49 +0000 (0:00:00.029) 0:00:06.363 *********
2025-08-29 14:38:49.862879 | orchestrator | ===============================================================================
2025-08-29 14:38:49.862890 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.52s
2025-08-29 14:38:49.862900 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.93s
2025-08-29 14:38:49.862910 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.68s
2025-08-29 14:38:50.264529 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-08-29 14:39:02.517941 | orchestrator | 2025-08-29 14:39:02 | INFO  | Task 35253c83-a466-44ec-8e56-cc0515eb45a5 (wait-for-connection) was prepared for execution.
2025-08-29 14:39:02.518154 | orchestrator | 2025-08-29 14:39:02 | INFO  | It takes a moment until task 35253c83-a466-44ec-8e56-cc0515eb45a5 (wait-for-connection) has been started and output is visible here.
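The sequence here is a common fire-and-forget reboot pattern: `osism apply reboot` triggers the reboot without waiting ("do not wait for the reboot to complete"), and the separate `osism apply wait-for-connection` run polls each node until it answers again. The generic retry loop can be sketched as below; `wait_until` is a hypothetical helper written for illustration (OSISM actually uses Ansible's `wait_for_connection` module here), with the probe command pluggable:

```shell
# Hedged sketch of the reconnect wait: retry an arbitrary probe command
# until it succeeds, giving up after max_attempts tries with a 5 s pause
# between attempts (mirroring the polling cadence visible in this log).
wait_until() {
  max_attempts="$1"; shift
  attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      return 1   # budget exhausted: node never became reachable
    fi
    attempt=$((attempt + 1))
    sleep 5
  done
}

# Illustrative probe (not from the log):
#   wait_until 60 ssh -o ConnectTimeout=5 testbed-node-0 true
```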
2025-08-29 14:39:19.030574 | orchestrator | 2025-08-29 14:39:19.030739 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-08-29 14:39:19.030752 | orchestrator | 2025-08-29 14:39:19.030763 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-08-29 14:39:19.030772 | orchestrator | Friday 29 August 2025 14:39:06 +0000 (0:00:00.242) 0:00:00.242 ********* 2025-08-29 14:39:19.030781 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:39:19.030792 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:39:19.030801 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:39:19.030809 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:39:19.030818 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:39:19.030826 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:39:19.030835 | orchestrator | 2025-08-29 14:39:19.030843 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:39:19.030853 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:39:19.030863 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:39:19.030872 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:39:19.030881 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:39:19.030889 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:39:19.030898 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:39:19.030938 | orchestrator | 2025-08-29 14:39:19.030947 | orchestrator | 2025-08-29 14:39:19.030956 | orchestrator | TASKS RECAP 
******************************************************************** 2025-08-29 14:39:19.030964 | orchestrator | Friday 29 August 2025 14:39:18 +0000 (0:00:11.624) 0:00:11.867 ********* 2025-08-29 14:39:19.030973 | orchestrator | =============================================================================== 2025-08-29 14:39:19.030981 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.63s 2025-08-29 14:39:19.443774 | orchestrator | + osism apply hddtemp 2025-08-29 14:39:31.630761 | orchestrator | 2025-08-29 14:39:31 | INFO  | Task 38bb23ce-d626-47d1-95e6-498238191174 (hddtemp) was prepared for execution. 2025-08-29 14:39:31.630881 | orchestrator | 2025-08-29 14:39:31 | INFO  | It takes a moment until task 38bb23ce-d626-47d1-95e6-498238191174 (hddtemp) has been started and output is visible here. 2025-08-29 14:40:00.071836 | orchestrator | 2025-08-29 14:40:00.071929 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-08-29 14:40:00.071937 | orchestrator | 2025-08-29 14:40:00.071941 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-08-29 14:40:00.071946 | orchestrator | Friday 29 August 2025 14:39:36 +0000 (0:00:00.291) 0:00:00.291 ********* 2025-08-29 14:40:00.071951 | orchestrator | ok: [testbed-manager] 2025-08-29 14:40:00.071956 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:40:00.071960 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:40:00.071964 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:40:00.071968 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:40:00.071971 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:40:00.071975 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:40:00.071979 | orchestrator | 2025-08-29 14:40:00.071983 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-08-29 14:40:00.071986 | orchestrator | Friday 29 August 2025 
14:39:36 +0000 (0:00:00.738) 0:00:01.030 ********* 2025-08-29 14:40:00.071991 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:40:00.071997 | orchestrator | 2025-08-29 14:40:00.072001 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-08-29 14:40:00.072017 | orchestrator | Friday 29 August 2025 14:39:38 +0000 (0:00:01.249) 0:00:02.279 ********* 2025-08-29 14:40:00.072021 | orchestrator | ok: [testbed-manager] 2025-08-29 14:40:00.072024 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:40:00.072028 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:40:00.072032 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:40:00.072035 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:40:00.072039 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:40:00.072043 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:40:00.072046 | orchestrator | 2025-08-29 14:40:00.072050 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-08-29 14:40:00.072054 | orchestrator | Friday 29 August 2025 14:39:40 +0000 (0:00:02.094) 0:00:04.374 ********* 2025-08-29 14:40:00.072058 | orchestrator | changed: [testbed-manager] 2025-08-29 14:40:00.072062 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:40:00.072066 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:40:00.072070 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:40:00.072073 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:40:00.072077 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:40:00.072081 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:40:00.072084 | orchestrator | 2025-08-29 14:40:00.072088 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-08-29 14:40:00.072092 | orchestrator | Friday 29 August 2025 14:39:41 +0000 (0:00:01.173) 0:00:05.547 ********* 2025-08-29 14:40:00.072095 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:40:00.072099 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:40:00.072103 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:40:00.072120 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:40:00.072124 | orchestrator | ok: [testbed-manager] 2025-08-29 14:40:00.072128 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:40:00.072132 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:40:00.072135 | orchestrator | 2025-08-29 14:40:00.072139 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-08-29 14:40:00.072143 | orchestrator | Friday 29 August 2025 14:39:42 +0000 (0:00:01.187) 0:00:06.735 ********* 2025-08-29 14:40:00.072147 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:40:00.072150 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:40:00.072154 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:40:00.072158 | orchestrator | changed: [testbed-manager] 2025-08-29 14:40:00.072161 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:40:00.072165 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:40:00.072168 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:40:00.072172 | orchestrator | 2025-08-29 14:40:00.072176 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-08-29 14:40:00.072180 | orchestrator | Friday 29 August 2025 14:39:43 +0000 (0:00:00.826) 0:00:07.561 ********* 2025-08-29 14:40:00.072183 | orchestrator | changed: [testbed-manager] 2025-08-29 14:40:00.072187 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:40:00.072191 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:40:00.072194 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:40:00.072198 | orchestrator | changed: 
[testbed-node-3] 2025-08-29 14:40:00.072202 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:40:00.072205 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:40:00.072209 | orchestrator | 2025-08-29 14:40:00.072213 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-08-29 14:40:00.072216 | orchestrator | Friday 29 August 2025 14:39:56 +0000 (0:00:12.915) 0:00:20.477 ********* 2025-08-29 14:40:00.072220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:40:00.072224 | orchestrator | 2025-08-29 14:40:00.072228 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-08-29 14:40:00.072231 | orchestrator | Friday 29 August 2025 14:39:57 +0000 (0:00:01.383) 0:00:21.860 ********* 2025-08-29 14:40:00.072235 | orchestrator | changed: [testbed-manager] 2025-08-29 14:40:00.072239 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:40:00.072242 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:40:00.072246 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:40:00.072249 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:40:00.072253 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:40:00.072257 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:40:00.072260 | orchestrator | 2025-08-29 14:40:00.072264 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:40:00.072268 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:40:00.072283 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:40:00.072287 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:40:00.072291 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:40:00.072294 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:40:00.072298 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:40:00.072309 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:40:00.072315 | orchestrator | 2025-08-29 14:40:00.072321 | orchestrator | 2025-08-29 14:40:00.072330 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:40:00.072343 | orchestrator | Friday 29 August 2025 14:39:59 +0000 (0:00:01.918) 0:00:23.779 ********* 2025-08-29 14:40:00.072350 | orchestrator | =============================================================================== 2025-08-29 14:40:00.072356 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.92s 2025-08-29 14:40:00.072362 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.09s 2025-08-29 14:40:00.072368 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.92s 2025-08-29 14:40:00.072373 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.38s 2025-08-29 14:40:00.072425 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.25s 2025-08-29 14:40:00.072432 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.19s 2025-08-29 14:40:00.072438 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.17s 2025-08-29 14:40:00.072444 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.83s
2025-08-29 14:40:00.072451 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.74s
2025-08-29 14:40:00.365454 | orchestrator | ++ semver latest 7.1.1
2025-08-29 14:40:00.419900 | orchestrator | + [[ -1 -ge 0 ]]
2025-08-29 14:40:00.419996 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-08-29 14:40:00.420013 | orchestrator | + sudo systemctl restart manager.service
2025-08-29 14:40:14.077009 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-08-29 14:40:14.077097 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-08-29 14:40:14.077106 | orchestrator | + local max_attempts=60
2025-08-29 14:40:14.077114 | orchestrator | + local name=ceph-ansible
2025-08-29 14:40:14.077120 | orchestrator | + local attempt_num=1
2025-08-29 14:40:14.077126 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:40:14.118809 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 14:40:14.118921 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 14:40:14.118944 | orchestrator | + sleep 5
2025-08-29 14:40:19.124847 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:40:19.157249 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 14:40:19.157338 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 14:40:19.157385 | orchestrator | + sleep 5
2025-08-29 14:40:24.160753 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:40:24.190173 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 14:40:24.190233 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 14:40:24.190244 | orchestrator | + sleep 5
2025-08-29 14:40:29.193742 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:40:29.232377 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 14:40:29.232436 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 14:40:29.232449 | orchestrator | + sleep 5
2025-08-29 14:40:34.236869 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:40:34.273024 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 14:40:34.273117 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 14:40:34.273131 | orchestrator | + sleep 5
2025-08-29 14:40:39.277980 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:40:39.315190 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 14:40:39.315302 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 14:40:39.315400 | orchestrator | + sleep 5
2025-08-29 14:40:44.321116 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:40:44.359153 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 14:40:44.359242 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 14:40:44.359256 | orchestrator | + sleep 5
2025-08-29 14:40:49.364265 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:40:49.407594 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-08-29 14:40:49.407706 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 14:40:49.407723 | orchestrator | + sleep 5
2025-08-29 14:40:54.410124 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:40:54.457724 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-08-29 14:40:54.457807 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 14:40:54.457822 | orchestrator | + sleep 5
2025-08-29 14:40:59.460570 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:40:59.500341 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-08-29 14:40:59.500426 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 14:40:59.500441 | orchestrator | + sleep 5
2025-08-29 14:41:04.505748 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:41:04.545646 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-08-29 14:41:04.545775 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 14:41:04.545799 | orchestrator | + sleep 5
2025-08-29 14:41:09.549775 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:41:09.587652 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-08-29 14:41:09.587741 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 14:41:09.587757 | orchestrator | + sleep 5
2025-08-29 14:41:14.591876 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:41:14.631163 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-08-29 14:41:14.631288 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 14:41:14.631304 | orchestrator | + sleep 5
2025-08-29 14:41:19.634924 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:41:19.673608 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-08-29 14:41:19.673691 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-08-29 14:41:19.673705 | orchestrator | + local max_attempts=60
2025-08-29 14:41:19.673717 | orchestrator | + local name=kolla-ansible
2025-08-29 14:41:19.673728 | orchestrator | + local attempt_num=1
2025-08-29 14:41:19.674782 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-08-29 14:41:19.707296 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-08-29 14:41:19.707380 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-08-29 14:41:19.707394 | orchestrator | + local max_attempts=60
2025-08-29 14:41:19.707406 | orchestrator | + local name=osism-ansible
2025-08-29 14:41:19.707417 | orchestrator | + local attempt_num=1
2025-08-29 14:41:19.707765 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-08-29 14:41:19.739798 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-08-29 14:41:19.739858 | orchestrator | + [[ true == \t\r\u\e ]]
2025-08-29 14:41:19.739872 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-08-29 14:41:19.892632 | orchestrator | ARA in ceph-ansible already disabled.
2025-08-29 14:41:20.038992 | orchestrator | ARA in kolla-ansible already disabled.
2025-08-29 14:41:20.191487 | orchestrator | ARA in osism-ansible already disabled.
2025-08-29 14:41:20.316664 | orchestrator | ARA in osism-kubernetes already disabled.
2025-08-29 14:41:20.316909 | orchestrator | + osism apply gather-facts
2025-08-29 14:41:32.418895 | orchestrator | 2025-08-29 14:41:32 | INFO  | Task db0f4954-eff2-4cd6-b3be-2363a9e480e1 (gather-facts) was prepared for execution.
2025-08-29 14:41:32.419080 | orchestrator | 2025-08-29 14:41:32 | INFO  | It takes a moment until task db0f4954-eff2-4cd6-b3be-2363a9e480e1 (gather-facts) has been started and output is visible here.
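The trace above is the testbed's `wait_for_container_healthy` helper: it polls `docker inspect -f '{{.State.Health.Status}}'` every 5 seconds until the container reports `healthy`, giving up after `max_attempts` probes. A minimal sketch of the same loop; to keep it runnable without Docker, the status probe is passed in as a command, and `probe_stub` below is a hypothetical stand-in (not part of the testbed scripts) that reports `starting` twice before `healthy`.

```shell
#!/usr/bin/env bash
# Sketch of the wait_for_container_healthy pattern visible in the trace.
# Assumption: the real helper shells out to
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
# here the probe is injected so the loop can run without Docker.
wait_for_healthy() {
    local max_attempts=$1
    local name=$2
    local probe=$3          # command that prints the current health status
    local attempt_num=1
    until [[ "$($probe "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "$name did not become healthy" >&2
            return 1
        fi
        sleep 1             # the testbed script sleeps 5s between probes
    done
    echo "$name is healthy"
}

# Hypothetical stub: "starting" on the first two probes, then "healthy".
# State is kept in a file because the probe runs in command substitutions.
probe_stub() {
    local f="/tmp/health_state_$$"
    local n=$(( $(cat "$f" 2>/dev/null || echo 0) + 1 ))
    echo "$n" > "$f"
    if (( n < 3 )); then echo starting; else echo healthy; fi
}

result=$(wait_for_healthy 60 ceph-ansible probe_stub)
rm -f "/tmp/health_state_$$"
echo "$result"
```

The `(( attempt_num++ == max_attempts ))` lines in the trace are this bounded-retry check; post-increment means the comparison uses the pre-increment value, so the loop really does probe `max_attempts` times.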
2025-08-29 14:41:45.302148 | orchestrator |
2025-08-29 14:41:45.302260 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-08-29 14:41:45.302274 | orchestrator |
2025-08-29 14:41:45.302282 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 14:41:45.302290 | orchestrator | Friday 29 August 2025 14:41:36 +0000 (0:00:00.213) 0:00:00.213 *********
2025-08-29 14:41:45.302298 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:41:45.302307 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:41:45.302315 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:41:45.302323 | orchestrator | ok: [testbed-manager]
2025-08-29 14:41:45.302349 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:41:45.302358 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:41:45.302365 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:41:45.302373 | orchestrator |
2025-08-29 14:41:45.302381 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-08-29 14:41:45.302389 | orchestrator |
2025-08-29 14:41:45.302397 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-08-29 14:41:45.302405 | orchestrator | Friday 29 August 2025 14:41:44 +0000 (0:00:08.207) 0:00:08.420 *********
2025-08-29 14:41:45.302413 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:41:45.302421 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:41:45.302429 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:41:45.302436 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:41:45.302444 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:41:45.302452 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:41:45.302460 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:41:45.302467 | orchestrator |
2025-08-29 14:41:45.302475 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:41:45.302483 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:41:45.302492 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:41:45.302500 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:41:45.302507 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:41:45.302515 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:41:45.302523 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:41:45.302531 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:41:45.302538 | orchestrator |
2025-08-29 14:41:45.302546 | orchestrator |
2025-08-29 14:41:45.302554 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:41:45.302562 | orchestrator | Friday 29 August 2025 14:41:45 +0000 (0:00:00.458) 0:00:08.879 *********
2025-08-29 14:41:45.302570 | orchestrator | ===============================================================================
2025-08-29 14:41:45.302578 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.21s
2025-08-29 14:41:45.302586 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s
2025-08-29 14:41:45.566894 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-08-29 14:41:45.585530 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-08-29 14:41:45.603112 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-08-29 14:41:45.618881 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-08-29 14:41:45.640446 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-08-29 14:41:45.658720 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-08-29 14:41:45.676084 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-08-29 14:41:45.692983 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-08-29 14:41:45.713954 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-08-29 14:41:45.733530 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-08-29 14:41:45.755672 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-08-29 14:41:45.775846 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-08-29 14:41:45.793176 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-08-29 14:41:45.812150 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-08-29 14:41:45.828644 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-08-29 14:41:45.840970 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-08-29 14:41:45.857697 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-08-29 14:41:45.872745 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-08-29 14:41:45.885311 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-08-29 14:41:45.901371 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-08-29 14:41:45.916593 | orchestrator | + [[ false == \t\r\u\e ]]
2025-08-29 14:41:46.027048 | orchestrator | ok: Runtime: 0:23:29.681292
2025-08-29 14:41:46.134230 |
2025-08-29 14:41:46.134562 | TASK [Deploy services]
2025-08-29 14:41:46.681469 | orchestrator | skipping: Conditional result was False
2025-08-29 14:41:46.700152 |
2025-08-29 14:41:46.700321 | TASK [Deploy in a nutshell]
2025-08-29 14:41:47.412543 | orchestrator |
2025-08-29 14:41:47.412745 | orchestrator | # PULL IMAGES
2025-08-29 14:41:47.412769 | orchestrator | + set -e
2025-08-29 14:41:47.412789 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-08-29 14:41:47.412811 | orchestrator | ++ export INTERACTIVE=false
2025-08-29 14:41:47.412826 | orchestrator | ++ INTERACTIVE=false
2025-08-29 14:41:47.412839 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-08-29 14:41:47.412887 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-08-29 14:41:47.412909 | orchestrator | + source /opt/manager-vars.sh
2025-08-29 14:41:47.412923 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-08-29 14:41:47.412941 | orchestrator | ++ NUMBER_OF_NODES=6
2025-08-29 14:41:47.412954 | orchestrator | ++ export CEPH_VERSION=reef
2025-08-29 14:41:47.412972 | orchestrator | ++ CEPH_VERSION=reef
2025-08-29 14:41:47.412984 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-08-29 14:41:47.413002 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-08-29 14:41:47.413013 | orchestrator | ++ export MANAGER_VERSION=latest
2025-08-29 14:41:47.413027 | orchestrator | ++ MANAGER_VERSION=latest
2025-08-29 14:41:47.413039 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-08-29 14:41:47.413051 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-08-29 14:41:47.413062 | orchestrator | ++ export ARA=false
2025-08-29 14:41:47.413074 | orchestrator | ++ ARA=false
2025-08-29 14:41:47.413085 | orchestrator | ++ export DEPLOY_MODE=manager
2025-08-29 14:41:47.413096 | orchestrator | ++ DEPLOY_MODE=manager
2025-08-29 14:41:47.413107 | orchestrator | ++ export TEMPEST=false
2025-08-29 14:41:47.413118 | orchestrator | ++ TEMPEST=false
2025-08-29 14:41:47.413128 | orchestrator | ++ export IS_ZUUL=true
2025-08-29 14:41:47.413139 | orchestrator | ++ IS_ZUUL=true
2025-08-29 14:41:47.413150 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.249
2025-08-29 14:41:47.413162 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.249
2025-08-29 14:41:47.413172 | orchestrator | ++ export EXTERNAL_API=false
2025-08-29 14:41:47.413183 | orchestrator | ++ EXTERNAL_API=false
2025-08-29 14:41:47.413243 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-08-29 14:41:47.413255 | orchestrator | ++ IMAGE_USER=ubuntu
2025-08-29 14:41:47.413267 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-08-29 14:41:47.413277 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-08-29 14:41:47.413289 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-08-29 14:41:47.413307 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-08-29 14:41:47.413318 | orchestrator | + echo
2025-08-29 14:41:47.413330 | orchestrator | + echo '# PULL IMAGES'
2025-08-29 14:41:47.413341 | orchestrator |
2025-08-29 14:41:47.413352 | orchestrator | + echo
2025-08-29 14:41:47.413567 | orchestrator | ++ semver latest 7.0.0
2025-08-29 14:41:47.468684 | orchestrator | + [[ -1 -ge 0 ]]
2025-08-29 14:41:47.468774 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-08-29 14:41:47.468786 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2025-08-29 14:41:49.460699 | orchestrator | 2025-08-29 14:41:49 | INFO  | Trying to run play pull-images in environment custom
2025-08-29 14:41:59.610915 | orchestrator | 2025-08-29 14:41:59 | INFO  | Task 51a1ee45-7630-4c99-8623-26b0050ad496 (pull-images) was prepared for execution.
2025-08-29 14:41:59.611049 | orchestrator | 2025-08-29 14:41:59 | INFO  | Task 51a1ee45-7630-4c99-8623-26b0050ad496 is running in background. No more output. Check ARA for logs.
2025-08-29 14:42:01.853998 | orchestrator | 2025-08-29 14:42:01 | INFO  | Trying to run play wipe-partitions in environment custom
2025-08-29 14:42:12.004830 | orchestrator | 2025-08-29 14:42:12 | INFO  | Task 3302679c-42a1-4033-b91a-b62717644246 (wipe-partitions) was prepared for execution.
2025-08-29 14:42:12.004957 | orchestrator | 2025-08-29 14:42:12 | INFO  | It takes a moment until task 3302679c-42a1-4033-b91a-b62717644246 (wipe-partitions) has been started and output is visible here.
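The `semver latest 7.0.0` / `[[ -1 -ge 0 ]]` / `[[ latest == \l\a\t\e\s\t ]]` sequence in the trace is a version gate: take the newer code path when `MANAGER_VERSION` compares at least equal to 7.0.0, or when it is pinned to the literal tag `latest`. A sketch of that gate, assuming `semver A B` prints `-1`/`0`/`1` for less/equal/greater; the `semver_cmp` function below is a hypothetical stand-in built on `sort -V`, not the testbed's actual `semver` helper.

```shell
#!/usr/bin/env bash
# Stand-in three-way version compare (assumption: mirrors "semver A B"
# printing -1/0/1). "latest" is not a numeric version; the trace shows the
# helper returning -1 for it, with the caller testing the tag explicitly.
semver_cmp() {
    local a=$1 b=$2
    if [[ "$a" == "$b" ]]; then echo 0; return; fi
    if [[ "$a" == "latest" ]]; then echo -1; return; fi
    if [[ "$b" == "latest" ]]; then echo 1; return; fi
    # sort -V orders dotted versions; the smaller one sorts first.
    if [[ "$(printf '%s\n' "$a" "$b" | sort -V | head -n1)" == "$a" ]]; then
        echo -1
    else
        echo 1
    fi
}

MANAGER_VERSION=latest
decision=skip
# Mirror of the gate in the trace: new path if version >= 7.0.0 OR "latest".
if [[ "$(semver_cmp "$MANAGER_VERSION" 7.0.0)" -ge 0 ]] \
   || [[ "$MANAGER_VERSION" == "latest" ]]; then
    decision="new code path"
fi
echo "$decision"
```

The explicit `== latest` fallback is why a non-numeric tag still reaches the new code path even though the numeric compare reports `-1`.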
2025-08-29 14:42:25.055399 | orchestrator |
2025-08-29 14:42:25.055495 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-08-29 14:42:25.055511 | orchestrator |
2025-08-29 14:42:25.055523 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-08-29 14:42:25.055541 | orchestrator | Friday 29 August 2025 14:42:17 +0000 (0:00:00.106) 0:00:00.106 *********
2025-08-29 14:42:25.055555 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:42:25.055568 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:42:25.055579 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:42:25.055590 | orchestrator |
2025-08-29 14:42:25.055602 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-08-29 14:42:25.055636 | orchestrator | Friday 29 August 2025 14:42:17 +0000 (0:00:00.529) 0:00:00.635 *********
2025-08-29 14:42:25.055649 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:42:25.055660 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:42:25.055675 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:42:25.055686 | orchestrator |
2025-08-29 14:42:25.055698 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-08-29 14:42:25.055709 | orchestrator | Friday 29 August 2025 14:42:17 +0000 (0:00:00.215) 0:00:00.851 *********
2025-08-29 14:42:25.055721 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:42:25.055733 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:42:25.055744 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:42:25.055755 | orchestrator |
2025-08-29 14:42:25.055766 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-08-29 14:42:25.055778 | orchestrator | Friday 29 August 2025 14:42:18 +0000 (0:00:00.644) 0:00:01.495 *********
2025-08-29 14:42:25.055789 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:42:25.055801 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:42:25.055811 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:42:25.055822 | orchestrator |
2025-08-29 14:42:25.055833 | orchestrator | TASK [Check device availability] ***********************************************
2025-08-29 14:42:25.055844 | orchestrator | Friday 29 August 2025 14:42:18 +0000 (0:00:00.229) 0:00:01.724 *********
2025-08-29 14:42:25.055862 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-08-29 14:42:25.055887 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-08-29 14:42:25.055900 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-08-29 14:42:25.055911 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-08-29 14:42:25.055922 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-08-29 14:42:25.055933 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-08-29 14:42:25.055945 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-08-29 14:42:25.055958 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-08-29 14:42:25.055971 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-08-29 14:42:25.055983 | orchestrator |
2025-08-29 14:42:25.055996 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-08-29 14:42:25.056009 | orchestrator | Friday 29 August 2025 14:42:19 +0000 (0:00:01.150) 0:00:02.875 *********
2025-08-29 14:42:25.056022 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-08-29 14:42:25.056035 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-08-29 14:42:25.056047 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-08-29 14:42:25.056059 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-08-29 14:42:25.056071 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-08-29 14:42:25.056084 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-08-29 14:42:25.056096 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-08-29 14:42:25.056109 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-08-29 14:42:25.056121 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-08-29 14:42:25.056133 | orchestrator |
2025-08-29 14:42:25.056168 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-08-29 14:42:25.056181 | orchestrator | Friday 29 August 2025 14:42:21 +0000 (0:00:01.329) 0:00:04.204 *********
2025-08-29 14:42:25.056193 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-08-29 14:42:25.056205 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-08-29 14:42:25.056218 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-08-29 14:42:25.056230 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-08-29 14:42:25.056243 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-08-29 14:42:25.056261 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-08-29 14:42:25.056274 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-08-29 14:42:25.056297 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-08-29 14:42:25.056309 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-08-29 14:42:25.056320 | orchestrator |
2025-08-29 14:42:25.056331 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-08-29 14:42:25.056342 | orchestrator | Friday 29 August 2025 14:42:23 +0000 (0:00:02.442) 0:00:06.647 *********
2025-08-29 14:42:25.056353 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:42:25.056364 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:42:25.056374 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:42:25.056385 | orchestrator |
2025-08-29 14:42:25.056396 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-08-29 14:42:25.056407 | orchestrator | Friday 29 August 2025 14:42:24 +0000 (0:00:00.569) 0:00:07.216 *********
2025-08-29 14:42:25.056418 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:42:25.056429 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:42:25.056440 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:42:25.056450 | orchestrator |
2025-08-29 14:42:25.056461 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:42:25.056474 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:25.056486 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:25.056512 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:25.056524 | orchestrator |
2025-08-29 14:42:25.056535 | orchestrator |
2025-08-29 14:42:25.056546 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:42:25.056557 | orchestrator | Friday 29 August 2025 14:42:24 +0000 (0:00:00.611) 0:00:07.828 *********
2025-08-29 14:42:25.056568 | orchestrator | ===============================================================================
2025-08-29 14:42:25.056579 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.44s
2025-08-29 14:42:25.056590 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.33s
2025-08-29 14:42:25.056601 | orchestrator | Check device availability ----------------------------------------------- 1.15s
2025-08-29 14:42:25.056612 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.64s
2025-08-29 14:42:25.056622 | orchestrator | Request device events from the kernel ----------------------------------- 0.61s
2025-08-29 14:42:25.056633 | orchestrator | Reload udev rules ------------------------------------------------------- 0.57s
2025-08-29 14:42:25.056644 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.53s
2025-08-29 14:42:25.056655 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s
2025-08-29 14:42:25.056666 | orchestrator | Remove all rook related logical devices --------------------------------- 0.22s
2025-08-29 14:42:36.942397 | orchestrator | 2025-08-29 14:42:36 | INFO  | Task 44b65473-c7d3-48f2-8c95-6f47ef5e11a7 (facts) was prepared for execution.
2025-08-29 14:42:36.942515 | orchestrator | 2025-08-29 14:42:36 | INFO  | It takes a moment until task 44b65473-c7d3-48f2-8c95-6f47ef5e11a7 (facts) has been started and output is visible here.
2025-08-29 14:42:49.427042 | orchestrator |
2025-08-29 14:42:49.427201 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-08-29 14:42:49.427220 | orchestrator |
2025-08-29 14:42:49.427233 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-08-29 14:42:49.427245 | orchestrator | Friday 29 August 2025 14:42:41 +0000 (0:00:00.286) 0:00:00.286 *********
2025-08-29 14:42:49.427257 | orchestrator | ok: [testbed-manager]
2025-08-29 14:42:49.427269 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:42:49.427280 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:42:49.427322 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:42:49.427334 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:42:49.427344 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:42:49.427355 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:42:49.427365 | orchestrator |
2025-08-29 14:42:49.427378 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-08-29 14:42:49.427389 |
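The wipe-partitions play above is the classic disk-reset sequence: remove filesystem/RAID signatures with `wipefs`, zero the first 32M of each device with `dd`, then reload udev rules and trigger device events so the kernel and udev see the now-empty disks. A scaled-down sketch of the zeroing step against a scratch file instead of `/dev/sdb`..`/dev/sdd` (sizes reduced from the play's 32M; the `udevadm control --reload-rules` / `udevadm trigger` steps are omitted because no real block device is involved):

```shell
#!/usr/bin/env bash
# Sketch of the "Overwrite first 32M with zeros" step, on a disposable file.
# The play runs this against real devices; conv=notrunc is what keeps dd
# from truncating the target while overwriting in place.
set -e
img=$(mktemp)

# Fake "disk": 4 MiB filled with a non-zero pattern (the play's devices
# hold real partition data here).
dd if=/dev/zero bs=1M count=4 2>/dev/null | tr '\0' 'x' > "$img"

# Zero the leading region in place (the play zeroes the first 32M;
# scaled to 2M here).
dd if=/dev/zero of="$img" bs=1M count=2 conv=notrunc 2>/dev/null

# Verify: leading 2 MiB are all zeros, the tail still holds the pattern.
zeros=$(head -c $((2*1024*1024)) "$img" | tr -d '\0' | wc -c)
rest=$(tail -c 1 "$img")
echo "non-zero bytes in head: $zeros, last byte: $rest"
rm -f "$img"
```

Zeroing only the leading megabytes is enough to destroy partition tables, LVM labels, and Ceph bluestore headers, which is why the play follows it with udev reload/trigger rather than a full-device wipe.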
orchestrator | Friday 29 August 2025 14:42:42 +0000 (0:00:01.104) 0:00:01.391 *********
2025-08-29 14:42:49.427400 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:42:49.427411 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:42:49.427422 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:42:49.427433 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:42:49.427443 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:42:49.427454 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:42:49.427464 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:42:49.427475 | orchestrator |
2025-08-29 14:42:49.427486 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-08-29 14:42:49.427497 | orchestrator |
2025-08-29 14:42:49.427507 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 14:42:49.427518 | orchestrator | Friday 29 August 2025 14:42:43 +0000 (0:00:01.238) 0:00:02.630 *********
2025-08-29 14:42:49.427528 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:42:49.427539 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:42:49.427551 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:42:49.427561 | orchestrator | ok: [testbed-manager]
2025-08-29 14:42:49.427574 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:42:49.427586 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:42:49.427598 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:42:49.427610 | orchestrator |
2025-08-29 14:42:49.427621 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-08-29 14:42:49.427634 | orchestrator |
2025-08-29 14:42:49.427645 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-08-29 14:42:49.427675 | orchestrator | Friday 29 August 2025 14:42:48 +0000 (0:00:04.618) 0:00:07.249 *********
2025-08-29 14:42:49.427694 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:42:49.427713 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:42:49.427730 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:42:49.427749 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:42:49.427769 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:42:49.427787 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:42:49.427806 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:42:49.427825 | orchestrator |
2025-08-29 14:42:49.427844 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:42:49.427857 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:49.427870 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:49.427883 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:49.427895 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:49.427907 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:49.427920 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:49.427932 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:49.427943 | orchestrator |
2025-08-29 14:42:49.427964 | orchestrator |
2025-08-29 14:42:49.427975 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:42:49.427985 | orchestrator | Friday 29 August 2025 14:42:48 +0000 (0:00:00.789) 0:00:08.038 *********
2025-08-29 14:42:49.427996 | orchestrator | ===============================================================================
2025-08-29 14:42:49.428007 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.62s
2025-08-29 14:42:49.428018 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s
2025-08-29 14:42:49.428028 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s
2025-08-29 14:42:49.428039 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.79s
2025-08-29 14:42:51.811414 | orchestrator | 2025-08-29 14:42:51 | INFO  | Task 41342102-bf5e-4d8b-a9e3-8495167a1835 (ceph-configure-lvm-volumes) was prepared for execution.
2025-08-29 14:42:51.811535 | orchestrator | 2025-08-29 14:42:51 | INFO  | It takes a moment until task 41342102-bf5e-4d8b-a9e3-8495167a1835 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-08-29 14:43:04.912349 | orchestrator |
2025-08-29 14:43:04.912495 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-08-29 14:43:04.912512 | orchestrator |
2025-08-29 14:43:04.912523 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 14:43:04.912538 | orchestrator | Friday 29 August 2025 14:42:56 +0000 (0:00:00.375) 0:00:00.375 *********
2025-08-29 14:43:04.912551 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 14:43:04.912562 | orchestrator |
2025-08-29 14:43:04.912574 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 14:43:04.912585 | orchestrator | Friday 29 August 2025 14:42:56 +0000 (0:00:00.273) 0:00:00.649 *********
2025-08-29 14:43:04.912595 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:43:04.912608 | orchestrator |
2025-08-29 14:43:04.912619 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:04.912630 | orchestrator | Friday 29 August 2025 14:42:56 +0000 (0:00:00.287) 0:00:00.936 *********
2025-08-29 14:43:04.912641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-08-29 14:43:04.912653 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-08-29 14:43:04.912664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-08-29 14:43:04.912675 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-08-29 14:43:04.912686 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-08-29 14:43:04.912696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-08-29 14:43:04.912707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-08-29 14:43:04.912717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-08-29 14:43:04.912728 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-08-29 14:43:04.912739 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-08-29 14:43:04.912750 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-08-29 14:43:04.912770 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-08-29 14:43:04.912781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-08-29 14:43:04.912792 | orchestrator |
2025-08-29 14:43:04.912802 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:04.912813 | orchestrator | Friday 29 August 2025 14:42:57 +0000 (0:00:00.397) 0:00:01.334 *********
2025-08-29 14:43:04.912824 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:43:04.912861 | orchestrator |
2025-08-29 14:43:04.912873 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:04.912883 | orchestrator | Friday 29 August 2025 14:42:57 +0000 (0:00:00.630) 0:00:01.964 *********
2025-08-29 14:43:04.912894 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:43:04.912905 | orchestrator |
2025-08-29 14:43:04.912916 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:04.912926 | orchestrator | Friday 29 August 2025 14:42:58 +0000 (0:00:00.215) 0:00:02.180 *********
2025-08-29 14:43:04.912938 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:43:04.912948 | orchestrator |
2025-08-29 14:43:04.912959 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:04.912970 | orchestrator | Friday 29 August 2025 14:42:58 +0000 (0:00:00.217) 0:00:02.397 *********
2025-08-29 14:43:04.912981 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:43:04.912996 | orchestrator |
2025-08-29 14:43:04.913006 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:04.913017 | orchestrator | Friday 29 August 2025 14:42:58 +0000 (0:00:00.207) 0:00:02.604 *********
2025-08-29 14:43:04.913028 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:43:04.913039 | orchestrator |
2025-08-29 14:43:04.913050 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:04.913061 | orchestrator | Friday 29 August 2025 14:42:58 +0000 (0:00:00.226) 0:00:02.831 *********
2025-08-29 14:43:04.913072 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:43:04.913083 | orchestrator |
2025-08-29 14:43:04.913093 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:04.913150 | orchestrator | Friday 29 August 2025 14:42:59 +0000 (0:00:00.198) 0:00:03.030 *********
2025-08-29 14:43:04.913163 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:43:04.913174 | orchestrator |
2025-08-29 14:43:04.913185 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:04.913196 | orchestrator | Friday 29 August 2025 14:42:59 +0000 (0:00:00.230) 0:00:03.260 *********
2025-08-29 14:43:04.913206 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:43:04.913217 | orchestrator |
2025-08-29 14:43:04.913228 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:04.913238 | orchestrator | Friday 29 August 2025 14:42:59 +0000 (0:00:00.223) 0:00:03.484 *********
2025-08-29 14:43:04.913250 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a)
2025-08-29 14:43:04.913262 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a)
2025-08-29 14:43:04.913273 | orchestrator |
2025-08-29 14:43:04.913284 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:04.913295 | orchestrator | Friday 29 August 2025 14:42:59 +0000 (0:00:00.462) 0:00:03.946 *********
2025-08-29 14:43:04.913325 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b5eca971-d360-4d10-a7ea-637f4b5fbeee)
2025-08-29 14:43:04.913336 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b5eca971-d360-4d10-a7ea-637f4b5fbeee)
2025-08-29 14:43:04.913347 | orchestrator |
2025-08-29 14:43:04.913358 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:04.913369 | orchestrator | Friday 29 August 2025 14:43:00 +0000 (0:00:00.484) 0:00:04.430 *********
2025-08-29 14:43:04.913380 |
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8530debd-d017-4f26-8837-9c6ea90d3888) 2025-08-29 14:43:04.913391 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8530debd-d017-4f26-8837-9c6ea90d3888) 2025-08-29 14:43:04.913401 | orchestrator | 2025-08-29 14:43:04.913412 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:04.913423 | orchestrator | Friday 29 August 2025 14:43:01 +0000 (0:00:00.942) 0:00:05.372 ********* 2025-08-29 14:43:04.913434 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_994268b6-638b-49ae-9337-a5a883f2caf6) 2025-08-29 14:43:04.913452 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_994268b6-638b-49ae-9337-a5a883f2caf6) 2025-08-29 14:43:04.913463 | orchestrator | 2025-08-29 14:43:04.913474 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:04.913485 | orchestrator | Friday 29 August 2025 14:43:02 +0000 (0:00:00.687) 0:00:06.060 ********* 2025-08-29 14:43:04.913496 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 14:43:04.913506 | orchestrator | 2025-08-29 14:43:04.913517 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:04.913534 | orchestrator | Friday 29 August 2025 14:43:02 +0000 (0:00:00.825) 0:00:06.886 ********* 2025-08-29 14:43:04.913545 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-08-29 14:43:04.913556 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-08-29 14:43:04.913567 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-08-29 14:43:04.913577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=loop3) 2025-08-29 14:43:04.913588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-08-29 14:43:04.913599 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-08-29 14:43:04.913609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-08-29 14:43:04.913620 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-08-29 14:43:04.913630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-08-29 14:43:04.913641 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-08-29 14:43:04.913651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-08-29 14:43:04.913662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-08-29 14:43:04.913673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-08-29 14:43:04.913683 | orchestrator | 2025-08-29 14:43:04.913694 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:04.913705 | orchestrator | Friday 29 August 2025 14:43:03 +0000 (0:00:00.361) 0:00:07.247 ********* 2025-08-29 14:43:04.913716 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:04.913726 | orchestrator | 2025-08-29 14:43:04.913737 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:04.913748 | orchestrator | Friday 29 August 2025 14:43:03 +0000 (0:00:00.198) 0:00:07.446 ********* 2025-08-29 14:43:04.913759 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:04.913769 | orchestrator | 2025-08-29 14:43:04.913780 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-08-29 14:43:04.913791 | orchestrator | Friday 29 August 2025 14:43:03 +0000 (0:00:00.234) 0:00:07.681 ********* 2025-08-29 14:43:04.913801 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:04.913812 | orchestrator | 2025-08-29 14:43:04.913823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:04.913833 | orchestrator | Friday 29 August 2025 14:43:03 +0000 (0:00:00.202) 0:00:07.883 ********* 2025-08-29 14:43:04.913844 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:04.913854 | orchestrator | 2025-08-29 14:43:04.913865 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:04.913876 | orchestrator | Friday 29 August 2025 14:43:04 +0000 (0:00:00.202) 0:00:08.085 ********* 2025-08-29 14:43:04.913887 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:04.913897 | orchestrator | 2025-08-29 14:43:04.913915 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:04.913926 | orchestrator | Friday 29 August 2025 14:43:04 +0000 (0:00:00.197) 0:00:08.283 ********* 2025-08-29 14:43:04.913937 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:04.913947 | orchestrator | 2025-08-29 14:43:04.913958 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:04.913969 | orchestrator | Friday 29 August 2025 14:43:04 +0000 (0:00:00.202) 0:00:08.485 ********* 2025-08-29 14:43:04.913979 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:04.913990 | orchestrator | 2025-08-29 14:43:04.914001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:04.914012 | orchestrator | Friday 29 August 2025 14:43:04 +0000 (0:00:00.207) 0:00:08.692 ********* 2025-08-29 14:43:04.914097 | 
orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:12.640262 | orchestrator | 2025-08-29 14:43:12.640410 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:12.640429 | orchestrator | Friday 29 August 2025 14:43:04 +0000 (0:00:00.191) 0:00:08.883 ********* 2025-08-29 14:43:12.640442 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-08-29 14:43:12.640455 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-08-29 14:43:12.640467 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-08-29 14:43:12.640478 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-08-29 14:43:12.640489 | orchestrator | 2025-08-29 14:43:12.640500 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:12.640511 | orchestrator | Friday 29 August 2025 14:43:05 +0000 (0:00:01.059) 0:00:09.943 ********* 2025-08-29 14:43:12.640523 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:12.640534 | orchestrator | 2025-08-29 14:43:12.640544 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:12.640556 | orchestrator | Friday 29 August 2025 14:43:06 +0000 (0:00:00.206) 0:00:10.149 ********* 2025-08-29 14:43:12.640567 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:12.640577 | orchestrator | 2025-08-29 14:43:12.640588 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:12.640599 | orchestrator | Friday 29 August 2025 14:43:06 +0000 (0:00:00.177) 0:00:10.326 ********* 2025-08-29 14:43:12.640610 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:12.640621 | orchestrator | 2025-08-29 14:43:12.640632 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:43:12.640643 | orchestrator | Friday 29 August 2025 14:43:06 +0000 (0:00:00.211) 0:00:10.538 
********* 2025-08-29 14:43:12.640654 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:12.640665 | orchestrator | 2025-08-29 14:43:12.640676 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-08-29 14:43:12.640687 | orchestrator | Friday 29 August 2025 14:43:06 +0000 (0:00:00.205) 0:00:10.743 ********* 2025-08-29 14:43:12.640697 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-08-29 14:43:12.640709 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-08-29 14:43:12.640720 | orchestrator | 2025-08-29 14:43:12.640731 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-08-29 14:43:12.640741 | orchestrator | Friday 29 August 2025 14:43:06 +0000 (0:00:00.185) 0:00:10.928 ********* 2025-08-29 14:43:12.640777 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:12.640789 | orchestrator | 2025-08-29 14:43:12.640800 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-08-29 14:43:12.640811 | orchestrator | Friday 29 August 2025 14:43:07 +0000 (0:00:00.135) 0:00:11.064 ********* 2025-08-29 14:43:12.640822 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:12.640832 | orchestrator | 2025-08-29 14:43:12.640844 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-08-29 14:43:12.640855 | orchestrator | Friday 29 August 2025 14:43:07 +0000 (0:00:00.133) 0:00:11.198 ********* 2025-08-29 14:43:12.640865 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:12.640900 | orchestrator | 2025-08-29 14:43:12.640912 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-08-29 14:43:12.640923 | orchestrator | Friday 29 August 2025 14:43:07 +0000 (0:00:00.142) 0:00:11.340 ********* 2025-08-29 14:43:12.640934 | orchestrator | ok: [testbed-node-3] 
2025-08-29 14:43:12.640945 | orchestrator | 2025-08-29 14:43:12.640955 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-08-29 14:43:12.640966 | orchestrator | Friday 29 August 2025 14:43:07 +0000 (0:00:00.132) 0:00:11.472 ********* 2025-08-29 14:43:12.640978 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '73f6d854-e6b6-54de-b399-c089d2858352'}}) 2025-08-29 14:43:12.640989 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b0db6b07-6be9-5d1b-9597-ea455233b3a1'}}) 2025-08-29 14:43:12.641000 | orchestrator | 2025-08-29 14:43:12.641011 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-08-29 14:43:12.641021 | orchestrator | Friday 29 August 2025 14:43:07 +0000 (0:00:00.172) 0:00:11.645 ********* 2025-08-29 14:43:12.641033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '73f6d854-e6b6-54de-b399-c089d2858352'}})  2025-08-29 14:43:12.641053 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b0db6b07-6be9-5d1b-9597-ea455233b3a1'}})  2025-08-29 14:43:12.641064 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:12.641075 | orchestrator | 2025-08-29 14:43:12.641086 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-08-29 14:43:12.641097 | orchestrator | Friday 29 August 2025 14:43:07 +0000 (0:00:00.136) 0:00:11.781 ********* 2025-08-29 14:43:12.641108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '73f6d854-e6b6-54de-b399-c089d2858352'}})  2025-08-29 14:43:12.641140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b0db6b07-6be9-5d1b-9597-ea455233b3a1'}})  2025-08-29 14:43:12.641151 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:12.641162 | 
orchestrator | 2025-08-29 14:43:12.641172 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-08-29 14:43:12.641183 | orchestrator | Friday 29 August 2025 14:43:08 +0000 (0:00:00.430) 0:00:12.212 ********* 2025-08-29 14:43:12.641194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '73f6d854-e6b6-54de-b399-c089d2858352'}})  2025-08-29 14:43:12.641205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b0db6b07-6be9-5d1b-9597-ea455233b3a1'}})  2025-08-29 14:43:12.641216 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:12.641227 | orchestrator | 2025-08-29 14:43:12.641258 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-08-29 14:43:12.641270 | orchestrator | Friday 29 August 2025 14:43:08 +0000 (0:00:00.172) 0:00:12.384 ********* 2025-08-29 14:43:12.641281 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:43:12.641292 | orchestrator | 2025-08-29 14:43:12.641303 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-08-29 14:43:12.641320 | orchestrator | Friday 29 August 2025 14:43:08 +0000 (0:00:00.140) 0:00:12.525 ********* 2025-08-29 14:43:12.641331 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:43:12.641342 | orchestrator | 2025-08-29 14:43:12.641353 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-08-29 14:43:12.641364 | orchestrator | Friday 29 August 2025 14:43:08 +0000 (0:00:00.133) 0:00:12.658 ********* 2025-08-29 14:43:12.641374 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:12.641385 | orchestrator | 2025-08-29 14:43:12.641396 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-08-29 14:43:12.641407 | orchestrator | Friday 29 August 2025 14:43:08 +0000 (0:00:00.134) 0:00:12.793 
********* 2025-08-29 14:43:12.641417 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:12.641428 | orchestrator | 2025-08-29 14:43:12.641447 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-08-29 14:43:12.641458 | orchestrator | Friday 29 August 2025 14:43:08 +0000 (0:00:00.126) 0:00:12.919 ********* 2025-08-29 14:43:12.641469 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:12.641480 | orchestrator | 2025-08-29 14:43:12.641490 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-08-29 14:43:12.641501 | orchestrator | Friday 29 August 2025 14:43:09 +0000 (0:00:00.126) 0:00:13.046 ********* 2025-08-29 14:43:12.641513 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 14:43:12.641523 | orchestrator |  "ceph_osd_devices": { 2025-08-29 14:43:12.641534 | orchestrator |  "sdb": { 2025-08-29 14:43:12.641546 | orchestrator |  "osd_lvm_uuid": "73f6d854-e6b6-54de-b399-c089d2858352" 2025-08-29 14:43:12.641557 | orchestrator |  }, 2025-08-29 14:43:12.641568 | orchestrator |  "sdc": { 2025-08-29 14:43:12.641579 | orchestrator |  "osd_lvm_uuid": "b0db6b07-6be9-5d1b-9597-ea455233b3a1" 2025-08-29 14:43:12.641590 | orchestrator |  } 2025-08-29 14:43:12.641601 | orchestrator |  } 2025-08-29 14:43:12.641612 | orchestrator | } 2025-08-29 14:43:12.641624 | orchestrator | 2025-08-29 14:43:12.641634 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-08-29 14:43:12.641645 | orchestrator | Friday 29 August 2025 14:43:09 +0000 (0:00:00.141) 0:00:13.187 ********* 2025-08-29 14:43:12.641656 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:12.641667 | orchestrator | 2025-08-29 14:43:12.641678 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-08-29 14:43:12.641689 | orchestrator | Friday 29 August 2025 14:43:09 +0000 (0:00:00.155) 0:00:13.343 ********* 
2025-08-29 14:43:12.641699 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:12.641710 | orchestrator | 2025-08-29 14:43:12.641721 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-08-29 14:43:12.641732 | orchestrator | Friday 29 August 2025 14:43:09 +0000 (0:00:00.131) 0:00:13.475 ********* 2025-08-29 14:43:12.641743 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:12.641753 | orchestrator | 2025-08-29 14:43:12.641764 | orchestrator | TASK [Print configuration data] ************************************************ 2025-08-29 14:43:12.641775 | orchestrator | Friday 29 August 2025 14:43:09 +0000 (0:00:00.139) 0:00:13.615 ********* 2025-08-29 14:43:12.641786 | orchestrator | changed: [testbed-node-3] => { 2025-08-29 14:43:12.641797 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-08-29 14:43:12.641807 | orchestrator |  "ceph_osd_devices": { 2025-08-29 14:43:12.641818 | orchestrator |  "sdb": { 2025-08-29 14:43:12.641829 | orchestrator |  "osd_lvm_uuid": "73f6d854-e6b6-54de-b399-c089d2858352" 2025-08-29 14:43:12.641840 | orchestrator |  }, 2025-08-29 14:43:12.641851 | orchestrator |  "sdc": { 2025-08-29 14:43:12.641862 | orchestrator |  "osd_lvm_uuid": "b0db6b07-6be9-5d1b-9597-ea455233b3a1" 2025-08-29 14:43:12.641873 | orchestrator |  } 2025-08-29 14:43:12.641883 | orchestrator |  }, 2025-08-29 14:43:12.641894 | orchestrator |  "lvm_volumes": [ 2025-08-29 14:43:12.641905 | orchestrator |  { 2025-08-29 14:43:12.641916 | orchestrator |  "data": "osd-block-73f6d854-e6b6-54de-b399-c089d2858352", 2025-08-29 14:43:12.641927 | orchestrator |  "data_vg": "ceph-73f6d854-e6b6-54de-b399-c089d2858352" 2025-08-29 14:43:12.641938 | orchestrator |  }, 2025-08-29 14:43:12.641948 | orchestrator |  { 2025-08-29 14:43:12.641959 | orchestrator |  "data": "osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1", 2025-08-29 14:43:12.641970 | orchestrator |  "data_vg": "ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1" 
2025-08-29 14:43:12.641981 | orchestrator |  } 2025-08-29 14:43:12.641992 | orchestrator |  ] 2025-08-29 14:43:12.642003 | orchestrator |  } 2025-08-29 14:43:12.642014 | orchestrator | } 2025-08-29 14:43:12.642097 | orchestrator | 2025-08-29 14:43:12.642125 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-08-29 14:43:12.642159 | orchestrator | Friday 29 August 2025 14:43:09 +0000 (0:00:00.220) 0:00:13.835 ********* 2025-08-29 14:43:12.642171 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 14:43:12.642222 | orchestrator | 2025-08-29 14:43:12.642233 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-08-29 14:43:12.642244 | orchestrator | 2025-08-29 14:43:12.642255 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 14:43:12.642266 | orchestrator | Friday 29 August 2025 14:43:12 +0000 (0:00:02.289) 0:00:16.125 ********* 2025-08-29 14:43:12.642276 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-08-29 14:43:12.642287 | orchestrator | 2025-08-29 14:43:12.642297 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 14:43:12.642308 | orchestrator | Friday 29 August 2025 14:43:12 +0000 (0:00:00.247) 0:00:16.372 ********* 2025-08-29 14:43:12.642319 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:43:12.642329 | orchestrator | 2025-08-29 14:43:12.642340 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:12.642359 | orchestrator | Friday 29 August 2025 14:43:12 +0000 (0:00:00.237) 0:00:16.610 ********* 2025-08-29 14:43:20.317944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-08-29 14:43:20.318100 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-4 => (item=loop1) 2025-08-29 14:43:20.318149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-08-29 14:43:20.318163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-08-29 14:43:20.318174 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-08-29 14:43:20.318185 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-08-29 14:43:20.318195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-08-29 14:43:20.318206 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-08-29 14:43:20.318217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-08-29 14:43:20.318229 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-08-29 14:43:20.318239 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-08-29 14:43:20.318250 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-08-29 14:43:20.318261 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-08-29 14:43:20.318276 | orchestrator | 2025-08-29 14:43:20.318288 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:20.318301 | orchestrator | Friday 29 August 2025 14:43:13 +0000 (0:00:00.375) 0:00:16.985 ********* 2025-08-29 14:43:20.318312 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:43:20.318324 | orchestrator | 2025-08-29 14:43:20.318335 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:20.318346 | orchestrator | Friday 29 August 2025 
14:43:13 +0000 (0:00:00.210) 0:00:17.196 ********* 2025-08-29 14:43:20.318357 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:43:20.318367 | orchestrator | 2025-08-29 14:43:20.318378 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:20.318389 | orchestrator | Friday 29 August 2025 14:43:13 +0000 (0:00:00.207) 0:00:17.404 ********* 2025-08-29 14:43:20.318400 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:43:20.318410 | orchestrator | 2025-08-29 14:43:20.318422 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:20.318432 | orchestrator | Friday 29 August 2025 14:43:13 +0000 (0:00:00.205) 0:00:17.610 ********* 2025-08-29 14:43:20.318443 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:43:20.318481 | orchestrator | 2025-08-29 14:43:20.318495 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:20.318507 | orchestrator | Friday 29 August 2025 14:43:13 +0000 (0:00:00.191) 0:00:17.801 ********* 2025-08-29 14:43:20.318519 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:43:20.318531 | orchestrator | 2025-08-29 14:43:20.318543 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:20.318555 | orchestrator | Friday 29 August 2025 14:43:14 +0000 (0:00:00.693) 0:00:18.495 ********* 2025-08-29 14:43:20.318567 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:43:20.318579 | orchestrator | 2025-08-29 14:43:20.318591 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:20.318603 | orchestrator | Friday 29 August 2025 14:43:14 +0000 (0:00:00.195) 0:00:18.691 ********* 2025-08-29 14:43:20.318615 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:43:20.318628 | orchestrator | 2025-08-29 14:43:20.318656 | orchestrator | TASK 
[Add known links to the list of available block devices] ****************** 2025-08-29 14:43:20.318668 | orchestrator | Friday 29 August 2025 14:43:14 +0000 (0:00:00.214) 0:00:18.906 ********* 2025-08-29 14:43:20.318680 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:43:20.318693 | orchestrator | 2025-08-29 14:43:20.318705 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:20.318717 | orchestrator | Friday 29 August 2025 14:43:15 +0000 (0:00:00.186) 0:00:19.092 ********* 2025-08-29 14:43:20.318729 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026) 2025-08-29 14:43:20.318742 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026) 2025-08-29 14:43:20.318755 | orchestrator | 2025-08-29 14:43:20.318767 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:20.318779 | orchestrator | Friday 29 August 2025 14:43:15 +0000 (0:00:00.419) 0:00:19.512 ********* 2025-08-29 14:43:20.318791 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cfe2d7b1-468c-4925-a2ef-e57e3e9904a6) 2025-08-29 14:43:20.318802 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cfe2d7b1-468c-4925-a2ef-e57e3e9904a6) 2025-08-29 14:43:20.318814 | orchestrator | 2025-08-29 14:43:20.318826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:43:20.318839 | orchestrator | Friday 29 August 2025 14:43:16 +0000 (0:00:00.524) 0:00:20.037 ********* 2025-08-29 14:43:20.318850 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2af32ac1-7951-4112-a429-d0343cb67ad9) 2025-08-29 14:43:20.318861 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2af32ac1-7951-4112-a429-d0343cb67ad9) 2025-08-29 14:43:20.318872 | orchestrator | 2025-08-29 
14:43:20.318882 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:20.318893 | orchestrator | Friday 29 August 2025 14:43:16 +0000 (0:00:00.479) 0:00:20.516 *********
2025-08-29 14:43:20.318920 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e4cd25e5-e70d-493f-a3e8-ae6027cdfc98)
2025-08-29 14:43:20.318932 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e4cd25e5-e70d-493f-a3e8-ae6027cdfc98)
2025-08-29 14:43:20.318943 | orchestrator |
2025-08-29 14:43:20.318954 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:20.318965 | orchestrator | Friday 29 August 2025 14:43:16 +0000 (0:00:00.432) 0:00:20.949 *********
2025-08-29 14:43:20.318976 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-08-29 14:43:20.318987 | orchestrator |
2025-08-29 14:43:20.318997 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:20.319008 | orchestrator | Friday 29 August 2025 14:43:17 +0000 (0:00:00.336) 0:00:21.285 *********
2025-08-29 14:43:20.319019 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-08-29 14:43:20.319038 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-08-29 14:43:20.319048 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-08-29 14:43:20.319059 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-08-29 14:43:20.319069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-08-29 14:43:20.319080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-08-29 14:43:20.319090 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-08-29 14:43:20.319101 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-08-29 14:43:20.319112 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-08-29 14:43:20.319139 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-08-29 14:43:20.319150 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-08-29 14:43:20.319161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-08-29 14:43:20.319171 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-08-29 14:43:20.319182 | orchestrator |
2025-08-29 14:43:20.319193 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:20.319203 | orchestrator | Friday 29 August 2025 14:43:17 +0000 (0:00:00.398) 0:00:21.684 *********
2025-08-29 14:43:20.319214 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:20.319225 | orchestrator |
2025-08-29 14:43:20.319235 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:20.319246 | orchestrator | Friday 29 August 2025 14:43:17 +0000 (0:00:00.185) 0:00:21.870 *********
2025-08-29 14:43:20.319257 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:20.319267 | orchestrator |
2025-08-29 14:43:20.319278 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:20.319288 | orchestrator | Friday 29 August 2025 14:43:18 +0000 (0:00:00.509) 0:00:22.380 *********
2025-08-29 14:43:20.319305 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:20.319316 | orchestrator |
2025-08-29 14:43:20.319327 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:20.319337 | orchestrator | Friday 29 August 2025 14:43:18 +0000 (0:00:00.160) 0:00:22.541 *********
2025-08-29 14:43:20.319348 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:20.319359 | orchestrator |
2025-08-29 14:43:20.319370 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:20.319381 | orchestrator | Friday 29 August 2025 14:43:18 +0000 (0:00:00.174) 0:00:22.716 *********
2025-08-29 14:43:20.319391 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:20.319402 | orchestrator |
2025-08-29 14:43:20.319413 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:20.319423 | orchestrator | Friday 29 August 2025 14:43:18 +0000 (0:00:00.206) 0:00:22.923 *********
2025-08-29 14:43:20.319434 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:20.319445 | orchestrator |
2025-08-29 14:43:20.319455 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:20.319466 | orchestrator | Friday 29 August 2025 14:43:19 +0000 (0:00:00.240) 0:00:23.163 *********
2025-08-29 14:43:20.319476 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:20.319487 | orchestrator |
2025-08-29 14:43:20.319498 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:20.319508 | orchestrator | Friday 29 August 2025 14:43:19 +0000 (0:00:00.175) 0:00:23.339 *********
2025-08-29 14:43:20.319519 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:20.319529 | orchestrator |
2025-08-29 14:43:20.319540 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:20.319557 | orchestrator | Friday 29 August 2025 14:43:19 +0000 (0:00:00.168) 0:00:23.507 *********
2025-08-29 14:43:20.319568 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-08-29 14:43:20.319579 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-08-29 14:43:20.319590 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-08-29 14:43:20.319601 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-08-29 14:43:20.319611 | orchestrator |
2025-08-29 14:43:20.319622 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:20.319633 | orchestrator | Friday 29 August 2025 14:43:20 +0000 (0:00:00.632) 0:00:24.140 *********
2025-08-29 14:43:20.319643 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:20.319654 | orchestrator |
2025-08-29 14:43:20.319671 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:25.981773 | orchestrator | Friday 29 August 2025 14:43:20 +0000 (0:00:00.150) 0:00:24.290 *********
2025-08-29 14:43:25.981878 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:25.981892 | orchestrator |
2025-08-29 14:43:25.981904 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:25.981915 | orchestrator | Friday 29 August 2025 14:43:20 +0000 (0:00:00.132) 0:00:24.423 *********
2025-08-29 14:43:25.981926 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:25.981937 | orchestrator |
2025-08-29 14:43:25.981947 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:25.981958 | orchestrator | Friday 29 August 2025 14:43:20 +0000 (0:00:00.131) 0:00:24.554 *********
2025-08-29 14:43:25.981968 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:25.981979 | orchestrator |
2025-08-29 14:43:25.981989 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-08-29 14:43:25.982000 | orchestrator | Friday 29 August 2025 14:43:20 +0000 (0:00:00.145) 0:00:24.699 *********
2025-08-29 14:43:25.982011 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-08-29 14:43:25.982098 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-08-29 14:43:25.982119 | orchestrator |
2025-08-29 14:43:25.982190 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-08-29 14:43:25.982208 | orchestrator | Friday 29 August 2025 14:43:21 +0000 (0:00:00.309) 0:00:25.009 *********
2025-08-29 14:43:25.982225 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:25.982244 | orchestrator |
2025-08-29 14:43:25.982261 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-08-29 14:43:25.982281 | orchestrator | Friday 29 August 2025 14:43:21 +0000 (0:00:00.100) 0:00:25.109 *********
2025-08-29 14:43:25.982300 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:25.982319 | orchestrator |
2025-08-29 14:43:25.982332 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-08-29 14:43:25.982342 | orchestrator | Friday 29 August 2025 14:43:21 +0000 (0:00:00.114) 0:00:25.224 *********
2025-08-29 14:43:25.982353 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:25.982363 | orchestrator |
2025-08-29 14:43:25.982374 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-08-29 14:43:25.982384 | orchestrator | Friday 29 August 2025 14:43:21 +0000 (0:00:00.131) 0:00:25.355 *********
2025-08-29 14:43:25.982395 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:43:25.982406 | orchestrator |
2025-08-29 14:43:25.982417 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-08-29 14:43:25.982427 | orchestrator | Friday 29 August 2025 14:43:21 +0000 (0:00:00.127) 0:00:25.482 *********
2025-08-29 14:43:25.982439 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8955e74f-f88a-5c8e-a869-5f490c143acc'}})
2025-08-29 14:43:25.982450 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '76bc2ac4-c5cd-591d-a103-fddbd09e4373'}})
2025-08-29 14:43:25.982460 | orchestrator |
2025-08-29 14:43:25.982471 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-08-29 14:43:25.982504 | orchestrator | Friday 29 August 2025 14:43:21 +0000 (0:00:00.150) 0:00:25.633 *********
2025-08-29 14:43:25.982516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8955e74f-f88a-5c8e-a869-5f490c143acc'}})
2025-08-29 14:43:25.982528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '76bc2ac4-c5cd-591d-a103-fddbd09e4373'}})
2025-08-29 14:43:25.982539 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:25.982549 | orchestrator |
2025-08-29 14:43:25.982560 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-08-29 14:43:25.982570 | orchestrator | Friday 29 August 2025 14:43:21 +0000 (0:00:00.155) 0:00:25.789 *********
2025-08-29 14:43:25.982600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8955e74f-f88a-5c8e-a869-5f490c143acc'}})
2025-08-29 14:43:25.982619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '76bc2ac4-c5cd-591d-a103-fddbd09e4373'}})
2025-08-29 14:43:25.982636 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:25.982654 | orchestrator |
2025-08-29 14:43:25.982669 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-08-29 14:43:25.982684 | orchestrator | Friday 29 August 2025 14:43:21 +0000 (0:00:00.137) 0:00:25.926 *********
2025-08-29 14:43:25.982699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8955e74f-f88a-5c8e-a869-5f490c143acc'}})
2025-08-29 14:43:25.982716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '76bc2ac4-c5cd-591d-a103-fddbd09e4373'}})
2025-08-29 14:43:25.982733 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:25.982750 | orchestrator |
2025-08-29 14:43:25.982768 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-08-29 14:43:25.982786 | orchestrator | Friday 29 August 2025 14:43:22 +0000 (0:00:00.138) 0:00:26.065 *********
2025-08-29 14:43:25.982804 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:43:25.982822 | orchestrator |
2025-08-29 14:43:25.982840 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-08-29 14:43:25.982860 | orchestrator | Friday 29 August 2025 14:43:22 +0000 (0:00:00.133) 0:00:26.198 *********
2025-08-29 14:43:25.982878 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:43:25.982893 | orchestrator |
2025-08-29 14:43:25.982904 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-08-29 14:43:25.982914 | orchestrator | Friday 29 August 2025 14:43:22 +0000 (0:00:00.125) 0:00:26.324 *********
2025-08-29 14:43:25.982925 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:25.982936 | orchestrator |
2025-08-29 14:43:25.982965 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-08-29 14:43:25.982976 | orchestrator | Friday 29 August 2025 14:43:22 +0000 (0:00:00.104) 0:00:26.428 *********
2025-08-29 14:43:25.982987 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:25.982997 | orchestrator |
2025-08-29 14:43:25.983008 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-08-29 14:43:25.983018 | orchestrator | Friday 29 August 2025 14:43:22 +0000 (0:00:00.265) 0:00:26.694 *********
2025-08-29 14:43:25.983029 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:25.983039 | orchestrator |
2025-08-29 14:43:25.983050 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-08-29 14:43:25.983060 | orchestrator | Friday 29 August 2025 14:43:22 +0000 (0:00:00.108) 0:00:26.802 *********
2025-08-29 14:43:25.983071 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 14:43:25.983082 | orchestrator |  "ceph_osd_devices": {
2025-08-29 14:43:25.983092 | orchestrator |  "sdb": {
2025-08-29 14:43:25.983104 | orchestrator |  "osd_lvm_uuid": "8955e74f-f88a-5c8e-a869-5f490c143acc"
2025-08-29 14:43:25.983115 | orchestrator |  },
2025-08-29 14:43:25.983196 | orchestrator |  "sdc": {
2025-08-29 14:43:25.983220 | orchestrator |  "osd_lvm_uuid": "76bc2ac4-c5cd-591d-a103-fddbd09e4373"
2025-08-29 14:43:25.983231 | orchestrator |  }
2025-08-29 14:43:25.983242 | orchestrator |  }
2025-08-29 14:43:25.983253 | orchestrator | }
2025-08-29 14:43:25.983269 | orchestrator |
2025-08-29 14:43:25.983287 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-08-29 14:43:25.983303 | orchestrator | Friday 29 August 2025 14:43:22 +0000 (0:00:00.110) 0:00:26.913 *********
2025-08-29 14:43:25.983318 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:25.983335 | orchestrator |
2025-08-29 14:43:25.983354 | orchestrator | TASK [Print DB devices] ********************************************************
2025-08-29 14:43:25.983374 | orchestrator | Friday 29 August 2025 14:43:23 +0000 (0:00:00.099) 0:00:27.013 *********
2025-08-29 14:43:25.983392 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:25.983409 | orchestrator |
2025-08-29 14:43:25.983420 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-08-29 14:43:25.983431 | orchestrator | Friday 29 August 2025 14:43:23 +0000 (0:00:00.121) 0:00:27.134 *********
2025-08-29 14:43:25.983441 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:43:25.983452 | orchestrator |
2025-08-29 14:43:25.983462 | orchestrator | TASK [Print configuration data] ************************************************
2025-08-29 14:43:25.983473 | orchestrator | Friday 29 August 2025 14:43:23 +0000 (0:00:00.130) 0:00:27.265 *********
2025-08-29 14:43:25.983483 | orchestrator | changed: [testbed-node-4] => {
2025-08-29 14:43:25.983494 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-08-29 14:43:25.983504 | orchestrator |  "ceph_osd_devices": {
2025-08-29 14:43:25.983515 | orchestrator |  "sdb": {
2025-08-29 14:43:25.983525 | orchestrator |  "osd_lvm_uuid": "8955e74f-f88a-5c8e-a869-5f490c143acc"
2025-08-29 14:43:25.983536 | orchestrator |  },
2025-08-29 14:43:25.983547 | orchestrator |  "sdc": {
2025-08-29 14:43:25.983557 | orchestrator |  "osd_lvm_uuid": "76bc2ac4-c5cd-591d-a103-fddbd09e4373"
2025-08-29 14:43:25.983568 | orchestrator |  }
2025-08-29 14:43:25.983578 | orchestrator |  },
2025-08-29 14:43:25.983589 | orchestrator |  "lvm_volumes": [
2025-08-29 14:43:25.983599 | orchestrator |  {
2025-08-29 14:43:25.983610 | orchestrator |  "data": "osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc",
2025-08-29 14:43:25.983621 | orchestrator |  "data_vg": "ceph-8955e74f-f88a-5c8e-a869-5f490c143acc"
2025-08-29 14:43:25.983632 | orchestrator |  },
2025-08-29 14:43:25.983642 | orchestrator |  {
2025-08-29 14:43:25.983652 | orchestrator |  "data": "osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373",
2025-08-29 14:43:25.983663 | orchestrator |  "data_vg": "ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373"
2025-08-29 14:43:25.983673 | orchestrator |  }
2025-08-29 14:43:25.983684 | orchestrator |  ]
2025-08-29 14:43:25.983694 | orchestrator |  }
2025-08-29 14:43:25.983705 | orchestrator | }
2025-08-29 14:43:25.983716 | orchestrator |
2025-08-29 14:43:25.983726 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-08-29 14:43:25.983736 | orchestrator | Friday 29 August 2025 14:43:23 +0000 (0:00:00.191) 0:00:27.457 *********
2025-08-29 14:43:25.983746 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-08-29 14:43:25.983755 | orchestrator |
2025-08-29 14:43:25.983764 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-08-29 14:43:25.983773 | orchestrator |
2025-08-29 14:43:25.983783 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 14:43:25.983792 | orchestrator | Friday 29 August 2025 14:43:24 +0000 (0:00:01.106) 0:00:28.564 *********
2025-08-29 14:43:25.983802 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-08-29 14:43:25.983811 | orchestrator |
2025-08-29 14:43:25.983820 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 14:43:25.983830 | orchestrator | Friday 29 August 2025 14:43:25 +0000 (0:00:00.468) 0:00:29.033 *********
2025-08-29 14:43:25.983846 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:43:25.983856 | orchestrator |
2025-08-29 14:43:25.983865 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:25.983875 | orchestrator | Friday 29 August 2025 14:43:25 +0000 (0:00:00.572) 0:00:29.605 *********
2025-08-29 14:43:25.983891 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-08-29 14:43:25.983901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-08-29 14:43:25.983910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-08-29 14:43:25.983920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-08-29 14:43:25.983929 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-08-29 14:43:25.983938 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-08-29 14:43:25.983955 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-08-29 14:43:34.092485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-08-29 14:43:34.092553 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-08-29 14:43:34.092559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-08-29 14:43:34.092564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-08-29 14:43:34.092569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-08-29 14:43:34.092574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-08-29 14:43:34.092579 | orchestrator |
2025-08-29 14:43:34.092584 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:34.092590 | orchestrator | Friday 29 August 2025 14:43:25 +0000 (0:00:00.336) 0:00:29.941 *********
2025-08-29 14:43:34.092595 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.092600 | orchestrator |
2025-08-29 14:43:34.092604 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:34.092609 | orchestrator | Friday 29 August 2025 14:43:26 +0000 (0:00:00.181) 0:00:30.123 *********
2025-08-29 14:43:34.092614 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.092618 | orchestrator |
2025-08-29 14:43:34.092623 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:34.092627 | orchestrator | Friday 29 August 2025 14:43:26 +0000 (0:00:00.192) 0:00:30.316 *********
2025-08-29 14:43:34.092632 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.092636 | orchestrator |
2025-08-29 14:43:34.092641 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:34.092645 | orchestrator | Friday 29 August 2025 14:43:26 +0000 (0:00:00.181) 0:00:30.497 *********
2025-08-29 14:43:34.092650 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.092654 | orchestrator |
2025-08-29 14:43:34.092659 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:34.092664 | orchestrator | Friday 29 August 2025 14:43:26 +0000 (0:00:00.173) 0:00:30.671 *********
2025-08-29 14:43:34.092668 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.092673 | orchestrator |
2025-08-29 14:43:34.092677 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:34.092682 | orchestrator | Friday 29 August 2025 14:43:26 +0000 (0:00:00.174) 0:00:30.845 *********
2025-08-29 14:43:34.092686 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.092691 | orchestrator |
2025-08-29 14:43:34.092696 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:34.092700 | orchestrator | Friday 29 August 2025 14:43:27 +0000 (0:00:00.169) 0:00:31.015 *********
2025-08-29 14:43:34.092705 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.092737 | orchestrator |
2025-08-29 14:43:34.092742 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:34.092747 | orchestrator | Friday 29 August 2025 14:43:27 +0000 (0:00:00.160) 0:00:31.175 *********
2025-08-29 14:43:34.092751 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.092756 | orchestrator |
2025-08-29 14:43:34.092761 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:34.092765 | orchestrator | Friday 29 August 2025 14:43:27 +0000 (0:00:00.167) 0:00:31.342 *********
2025-08-29 14:43:34.092770 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d)
2025-08-29 14:43:34.092776 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d)
2025-08-29 14:43:34.092781 | orchestrator |
2025-08-29 14:43:34.092785 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:34.092790 | orchestrator | Friday 29 August 2025 14:43:27 +0000 (0:00:00.516) 0:00:31.859 *********
2025-08-29 14:43:34.092794 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e2b81981-b087-421f-a1f1-ab20210f7cdd)
2025-08-29 14:43:34.092799 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e2b81981-b087-421f-a1f1-ab20210f7cdd)
2025-08-29 14:43:34.092803 | orchestrator |
2025-08-29 14:43:34.092808 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:34.092812 | orchestrator | Friday 29 August 2025 14:43:28 +0000 (0:00:00.837) 0:00:32.697 *********
2025-08-29 14:43:34.092817 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4ecbb96d-8085-4234-aac0-aef459b35ca9)
2025-08-29 14:43:34.092822 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4ecbb96d-8085-4234-aac0-aef459b35ca9)
2025-08-29 14:43:34.092826 | orchestrator |
2025-08-29 14:43:34.092830 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:34.092835 | orchestrator | Friday 29 August 2025 14:43:29 +0000 (0:00:00.449) 0:00:33.146 *********
2025-08-29 14:43:34.092840 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d88be871-1de9-4c4e-96cc-2f99c6f9bcd6)
2025-08-29 14:43:34.092844 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d88be871-1de9-4c4e-96cc-2f99c6f9bcd6)
2025-08-29 14:43:34.092849 | orchestrator |
2025-08-29 14:43:34.092853 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:43:34.092858 | orchestrator | Friday 29 August 2025 14:43:29 +0000 (0:00:00.464) 0:00:33.611 *********
2025-08-29 14:43:34.092862 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-08-29 14:43:34.092867 | orchestrator |
2025-08-29 14:43:34.092871 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:34.092876 | orchestrator | Friday 29 August 2025 14:43:30 +0000 (0:00:00.469) 0:00:34.083 *********
2025-08-29 14:43:34.092890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-08-29 14:43:34.092895 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-08-29 14:43:34.092899 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-08-29 14:43:34.092904 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-08-29 14:43:34.092908 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-08-29 14:43:34.092913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-08-29 14:43:34.092917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-08-29 14:43:34.092922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-08-29 14:43:34.092927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-08-29 14:43:34.092948 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-08-29 14:43:34.092953 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-08-29 14:43:34.092957 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-08-29 14:43:34.092962 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-08-29 14:43:34.092966 | orchestrator |
2025-08-29 14:43:34.092971 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:34.092975 | orchestrator | Friday 29 August 2025 14:43:30 +0000 (0:00:00.464) 0:00:34.547 *********
2025-08-29 14:43:34.092980 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.092984 | orchestrator |
2025-08-29 14:43:34.092989 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:34.092994 | orchestrator | Friday 29 August 2025 14:43:30 +0000 (0:00:00.237) 0:00:34.784 *********
2025-08-29 14:43:34.092998 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.093003 | orchestrator |
2025-08-29 14:43:34.093007 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:34.093012 | orchestrator | Friday 29 August 2025 14:43:31 +0000 (0:00:00.222) 0:00:35.006 *********
2025-08-29 14:43:34.093016 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.093021 | orchestrator |
2025-08-29 14:43:34.093027 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:34.093033 | orchestrator | Friday 29 August 2025 14:43:31 +0000 (0:00:00.200) 0:00:35.207 *********
2025-08-29 14:43:34.093041 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.093049 | orchestrator |
2025-08-29 14:43:34.093057 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:34.093064 | orchestrator | Friday 29 August 2025 14:43:31 +0000 (0:00:00.262) 0:00:35.470 *********
2025-08-29 14:43:34.093071 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.093078 | orchestrator |
2025-08-29 14:43:34.093086 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:34.093093 | orchestrator | Friday 29 August 2025 14:43:31 +0000 (0:00:00.188) 0:00:35.659 *********
2025-08-29 14:43:34.093101 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.093108 | orchestrator |
2025-08-29 14:43:34.093116 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:34.093141 | orchestrator | Friday 29 August 2025 14:43:32 +0000 (0:00:00.480) 0:00:36.140 *********
2025-08-29 14:43:34.093151 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.093160 | orchestrator |
2025-08-29 14:43:34.093169 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:34.093178 | orchestrator | Friday 29 August 2025 14:43:32 +0000 (0:00:00.229) 0:00:36.370 *********
2025-08-29 14:43:34.093187 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.093196 | orchestrator |
2025-08-29 14:43:34.093205 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:34.093214 | orchestrator | Friday 29 August 2025 14:43:32 +0000 (0:00:00.215) 0:00:36.585 *********
2025-08-29 14:43:34.093223 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-08-29 14:43:34.093231 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-08-29 14:43:34.093241 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-08-29 14:43:34.093248 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-08-29 14:43:34.093256 | orchestrator |
2025-08-29 14:43:34.093263 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:34.093271 | orchestrator | Friday 29 August 2025 14:43:33 +0000 (0:00:00.658) 0:00:37.244 *********
2025-08-29 14:43:34.093279 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.093287 | orchestrator |
2025-08-29 14:43:34.093296 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:34.093311 | orchestrator | Friday 29 August 2025 14:43:33 +0000 (0:00:00.184) 0:00:37.428 *********
2025-08-29 14:43:34.093319 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.093328 | orchestrator |
2025-08-29 14:43:34.093336 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:34.093343 | orchestrator | Friday 29 August 2025 14:43:33 +0000 (0:00:00.204) 0:00:37.633 *********
2025-08-29 14:43:34.093351 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.093358 | orchestrator |
2025-08-29 14:43:34.093365 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:43:34.093373 | orchestrator | Friday 29 August 2025 14:43:33 +0000 (0:00:00.218) 0:00:37.851 *********
2025-08-29 14:43:34.093380 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:34.093386 | orchestrator |
2025-08-29 14:43:34.093391 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-08-29 14:43:34.093401 | orchestrator | Friday 29 August 2025 14:43:34 +0000 (0:00:00.213) 0:00:38.065 *********
2025-08-29 14:43:38.989374 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-08-29 14:43:38.989481 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-08-29 14:43:38.989496 | orchestrator |
2025-08-29 14:43:38.989509 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-08-29 14:43:38.989520 | orchestrator | Friday 29 August 2025 14:43:34 +0000 (0:00:00.193) 0:00:38.259 *********
2025-08-29 14:43:38.989531 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:38.989542 | orchestrator |
2025-08-29 14:43:38.989553 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-08-29 14:43:38.989564 | orchestrator | Friday 29 August 2025 14:43:34 +0000 (0:00:00.222) 0:00:38.481 *********
2025-08-29 14:43:38.989575 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:38.989586 | orchestrator |
2025-08-29 14:43:38.989597 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-08-29 14:43:38.989608 | orchestrator | Friday 29 August 2025 14:43:34 +0000 (0:00:00.165) 0:00:38.647 *********
2025-08-29 14:43:38.989618 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:38.989629 | orchestrator |
2025-08-29 14:43:38.989640 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-08-29 14:43:38.989650 | orchestrator | Friday 29 August 2025 14:43:34 +0000 (0:00:00.125) 0:00:38.773 *********
2025-08-29 14:43:38.989661 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:43:38.989673 | orchestrator |
2025-08-29 14:43:38.989684 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-08-29 14:43:38.989694 | orchestrator | Friday 29 August 2025 14:43:35 +0000 (0:00:00.414) 0:00:39.187 *********
2025-08-29 14:43:38.989707 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'}})
2025-08-29 14:43:38.989719 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'}})
2025-08-29 14:43:38.989730 | orchestrator |
2025-08-29 14:43:38.989740 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-08-29 14:43:38.989751 | orchestrator | Friday 29 August 2025 14:43:35 +0000 (0:00:00.204) 0:00:39.392 *********
2025-08-29 14:43:38.989763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'}})
2025-08-29 14:43:38.989775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'}})
2025-08-29 14:43:38.989786 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:38.989797 | orchestrator |
2025-08-29 14:43:38.989808 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-08-29 14:43:38.989819 | orchestrator | Friday 29 August 2025 14:43:35 +0000 (0:00:00.205) 0:00:39.598 *********
2025-08-29 14:43:38.989830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'}})
2025-08-29 14:43:38.989865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'}})
2025-08-29 14:43:38.989876 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:38.989887 | orchestrator |
2025-08-29 14:43:38.989897 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-08-29 14:43:38.989908 | orchestrator | Friday 29 August 2025 14:43:35 +0000 (0:00:00.182) 0:00:39.781 *********
2025-08-29 14:43:38.989919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'}})
2025-08-29 14:43:38.989930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'}})
2025-08-29 14:43:38.989941 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:38.989951 | orchestrator |
2025-08-29 14:43:38.989962 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-08-29 14:43:38.989972 | orchestrator | Friday 29 August 2025 14:43:35 +0000 (0:00:00.155) 0:00:39.937 *********
2025-08-29 14:43:38.989983 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:43:38.989994 | orchestrator |
2025-08-29 14:43:38.990084 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-08-29 14:43:38.990099 | orchestrator | Friday 29 August 2025 14:43:36 +0000 (0:00:00.186) 0:00:40.124 *********
2025-08-29 14:43:38.990110 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:43:38.990121 | orchestrator |
2025-08-29 14:43:38.990152 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-08-29 14:43:38.990164 | orchestrator | Friday 29 August 2025 14:43:36 +0000 (0:00:00.172) 0:00:40.297 *********
2025-08-29 14:43:38.990175 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:38.990186 | orchestrator |
2025-08-29 14:43:38.990196 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-08-29 14:43:38.990207 | orchestrator | Friday 29 August 2025 14:43:36 +0000 (0:00:00.184) 0:00:40.481 *********
2025-08-29 14:43:38.990218 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:38.990229 | orchestrator |
2025-08-29 14:43:38.990240 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-08-29 14:43:38.990251 | orchestrator | Friday 29 August 2025 14:43:36 +0000 (0:00:00.201) 0:00:40.682 *********
2025-08-29 14:43:38.990261 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:38.990272 | orchestrator |
2025-08-29 14:43:38.990283 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-08-29 14:43:38.990293 | orchestrator | Friday 29 August 2025 14:43:36 +0000 (0:00:00.133) 0:00:40.816 *********
2025-08-29 14:43:38.990304 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 14:43:38.990315 | orchestrator |  "ceph_osd_devices": {
2025-08-29 14:43:38.990326 | orchestrator |  "sdb": {
2025-08-29 14:43:38.990338 | orchestrator |  "osd_lvm_uuid": "dc8c4f7f-2eb1-5ff6-8642-584f5da1f281"
2025-08-29 14:43:38.990367 | orchestrator |  },
2025-08-29 14:43:38.990378 | orchestrator |  "sdc": {
2025-08-29 14:43:38.990389 | orchestrator |  "osd_lvm_uuid": "74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde"
2025-08-29 14:43:38.990400 | orchestrator |  }
2025-08-29 14:43:38.990411 | orchestrator |  }
2025-08-29 14:43:38.990423 | orchestrator | }
2025-08-29 14:43:38.990434 | orchestrator |
2025-08-29 14:43:38.990445 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-08-29 14:43:38.990456 | orchestrator | Friday 29 August 2025 14:43:37 +0000 (0:00:00.168) 0:00:40.985 *********
2025-08-29 14:43:38.990467 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:38.990478 | orchestrator |
2025-08-29 14:43:38.990488 | orchestrator | TASK [Print DB devices] ********************************************************
2025-08-29 14:43:38.990499 | orchestrator | Friday 29 August 2025 14:43:37 +0000 (0:00:00.134) 0:00:41.119 *********
2025-08-29 14:43:38.990510 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:38.990521 | orchestrator |
2025-08-29 14:43:38.990532 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-08-29 14:43:38.990551 | orchestrator | Friday 29 August 2025 14:43:37 +0000 (0:00:00.421) 0:00:41.540 *********
2025-08-29 14:43:38.990562 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:43:38.990573 | orchestrator |
2025-08-29 14:43:38.990584 | orchestrator | TASK [Print
configuration data] ************************************************ 2025-08-29 14:43:38.990595 | orchestrator | Friday 29 August 2025 14:43:37 +0000 (0:00:00.145) 0:00:41.686 ********* 2025-08-29 14:43:38.990605 | orchestrator | changed: [testbed-node-5] => { 2025-08-29 14:43:38.990616 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-08-29 14:43:38.990627 | orchestrator |  "ceph_osd_devices": { 2025-08-29 14:43:38.990638 | orchestrator |  "sdb": { 2025-08-29 14:43:38.990649 | orchestrator |  "osd_lvm_uuid": "dc8c4f7f-2eb1-5ff6-8642-584f5da1f281" 2025-08-29 14:43:38.990660 | orchestrator |  }, 2025-08-29 14:43:38.990671 | orchestrator |  "sdc": { 2025-08-29 14:43:38.990682 | orchestrator |  "osd_lvm_uuid": "74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde" 2025-08-29 14:43:38.990692 | orchestrator |  } 2025-08-29 14:43:38.990703 | orchestrator |  }, 2025-08-29 14:43:38.990714 | orchestrator |  "lvm_volumes": [ 2025-08-29 14:43:38.990725 | orchestrator |  { 2025-08-29 14:43:38.990736 | orchestrator |  "data": "osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281", 2025-08-29 14:43:38.990746 | orchestrator |  "data_vg": "ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281" 2025-08-29 14:43:38.990757 | orchestrator |  }, 2025-08-29 14:43:38.990768 | orchestrator |  { 2025-08-29 14:43:38.990778 | orchestrator |  "data": "osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde", 2025-08-29 14:43:38.990789 | orchestrator |  "data_vg": "ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde" 2025-08-29 14:43:38.990800 | orchestrator |  } 2025-08-29 14:43:38.990811 | orchestrator |  ] 2025-08-29 14:43:38.990822 | orchestrator |  } 2025-08-29 14:43:38.990837 | orchestrator | } 2025-08-29 14:43:38.990848 | orchestrator | 2025-08-29 14:43:38.990859 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-08-29 14:43:38.990870 | orchestrator | Friday 29 August 2025 14:43:37 +0000 (0:00:00.224) 0:00:41.911 ********* 2025-08-29 14:43:38.990881 | orchestrator | changed: 
[testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-08-29 14:43:38.990892 | orchestrator | 2025-08-29 14:43:38.990902 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:43:38.990913 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 14:43:38.990931 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 14:43:38.990950 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 14:43:38.990968 | orchestrator | 2025-08-29 14:43:38.990985 | orchestrator | 2025-08-29 14:43:38.991003 | orchestrator | 2025-08-29 14:43:38.991020 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:43:38.991037 | orchestrator | Friday 29 August 2025 14:43:38 +0000 (0:00:01.029) 0:00:42.941 ********* 2025-08-29 14:43:38.991056 | orchestrator | =============================================================================== 2025-08-29 14:43:38.991073 | orchestrator | Write configuration file ------------------------------------------------ 4.43s 2025-08-29 14:43:38.991092 | orchestrator | Add known partitions to the list of available block devices ------------- 1.22s 2025-08-29 14:43:38.991110 | orchestrator | Add known links to the list of available block devices ------------------ 1.11s 2025-08-29 14:43:38.991128 | orchestrator | Get initial list of available block devices ----------------------------- 1.10s 2025-08-29 14:43:38.991170 | orchestrator | Add known partitions to the list of available block devices ------------- 1.06s 2025-08-29 14:43:38.991192 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.99s 2025-08-29 14:43:38.991203 | orchestrator | Add known links to the list of available block devices ------------------ 0.94s 2025-08-29 
14:43:38.991214 | orchestrator | Add known links to the list of available block devices ------------------ 0.84s 2025-08-29 14:43:38.991224 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s 2025-08-29 14:43:38.991235 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.75s 2025-08-29 14:43:38.991246 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2025-08-29 14:43:38.991256 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.69s 2025-08-29 14:43:38.991267 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2025-08-29 14:43:38.991277 | orchestrator | Print DB devices -------------------------------------------------------- 0.67s 2025-08-29 14:43:38.991297 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.67s 2025-08-29 14:43:39.432397 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s 2025-08-29 14:43:39.432472 | orchestrator | Print configuration data ------------------------------------------------ 0.64s 2025-08-29 14:43:39.432477 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s 2025-08-29 14:43:39.432481 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2025-08-29 14:43:39.432486 | orchestrator | Set WAL devices config data --------------------------------------------- 0.59s 2025-08-29 14:44:02.366082 | orchestrator | 2025-08-29 14:44:02 | INFO  | Task 6f155f25-275b-4af2-ab63-3b2f2a92d77f (sync inventory) is running in background. Output coming soon. 
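The `lvm_volumes` list shown by the "Print configuration data" task above is derived mechanically from `ceph_osd_devices`: each OSD's `osd_lvm_uuid` yields an `osd-block-<uuid>` LV name and a matching `ceph-<uuid>` VG name. A minimal Python sketch of that mapping, using the values from the log (this is an illustration, not the actual OSISM task code, which does this with Ansible/Jinja2 filters):

```python
# Values as printed by the "Print ceph_osd_devices" task for testbed-node-5.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "dc8c4f7f-2eb1-5ff6-8642-584f5da1f281"},
    "sdc": {"osd_lvm_uuid": "74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde"},
}

def build_lvm_volumes(devices: dict) -> list:
    """Derive block-only lvm_volumes entries from the OSD LVM UUIDs:
    one {'data': 'osd-block-<uuid>', 'data_vg': 'ceph-<uuid>'} per device."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in devices.values()
    ]

lvm_volumes = build_lvm_volumes(ceph_osd_devices)
```

The result matches the `lvm_volumes` structure written to the configuration file by the handler at the end of the play.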
2025-08-29 14:44:29.319257 | orchestrator | 2025-08-29 14:44:03 | INFO  | Starting group_vars file reorganization
2025-08-29 14:44:29.319377 | orchestrator | 2025-08-29 14:44:03 | INFO  | Moved 0 file(s) to their respective directories
2025-08-29 14:44:29.319395 | orchestrator | 2025-08-29 14:44:03 | INFO  | Group_vars file reorganization completed
2025-08-29 14:44:29.319407 | orchestrator | 2025-08-29 14:44:05 | INFO  | Starting variable preparation from inventory
2025-08-29 14:44:29.319419 | orchestrator | 2025-08-29 14:44:09 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-08-29 14:44:29.319430 | orchestrator | 2025-08-29 14:44:09 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-08-29 14:44:29.319441 | orchestrator | 2025-08-29 14:44:09 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-08-29 14:44:29.319452 | orchestrator | 2025-08-29 14:44:09 | INFO  | 3 file(s) written, 6 host(s) processed
2025-08-29 14:44:29.319463 | orchestrator | 2025-08-29 14:44:09 | INFO  | Variable preparation completed
2025-08-29 14:44:29.319474 | orchestrator | 2025-08-29 14:44:10 | INFO  | Starting inventory overwrite handling
2025-08-29 14:44:29.319486 | orchestrator | 2025-08-29 14:44:10 | INFO  | Handling group overwrites in 99-overwrite
2025-08-29 14:44:29.319523 | orchestrator | 2025-08-29 14:44:10 | INFO  | Removing group frr:children from 60-generic
2025-08-29 14:44:29.319535 | orchestrator | 2025-08-29 14:44:10 | INFO  | Removing group storage:children from 50-kolla
2025-08-29 14:44:29.319546 | orchestrator | 2025-08-29 14:44:10 | INFO  | Removing group netbird:children from 50-infrastruture
2025-08-29 14:44:29.319557 | orchestrator | 2025-08-29 14:44:10 | INFO  | Removing group ceph-rgw from 50-ceph
2025-08-29 14:44:29.319569 | orchestrator | 2025-08-29 14:44:10 | INFO  | Removing group ceph-mds from 50-ceph
2025-08-29 14:44:29.319580 | orchestrator | 2025-08-29 14:44:10 | INFO  | Handling group overwrites in 20-roles
2025-08-29 14:44:29.319591 | orchestrator | 2025-08-29 14:44:10 | INFO  | Removing group k3s_node from 50-infrastruture
2025-08-29 14:44:29.319624 | orchestrator | 2025-08-29 14:44:10 | INFO  | Removed 6 group(s) in total
2025-08-29 14:44:29.319635 | orchestrator | 2025-08-29 14:44:10 | INFO  | Inventory overwrite handling completed
2025-08-29 14:44:29.319646 | orchestrator | 2025-08-29 14:44:11 | INFO  | Starting merge of inventory files
2025-08-29 14:44:29.319657 | orchestrator | 2025-08-29 14:44:11 | INFO  | Inventory files merged successfully
2025-08-29 14:44:29.319668 | orchestrator | 2025-08-29 14:44:16 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-08-29 14:44:29.319679 | orchestrator | 2025-08-29 14:44:28 | INFO  | Successfully wrote ClusterShell configuration
2025-08-29 14:44:29.319690 | orchestrator | [master 3c97ded] 2025-08-29-14-44
2025-08-29 14:44:29.319703 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-08-29 14:44:31.769914 | orchestrator | 2025-08-29 14:44:31 | INFO  | Task 6622d0a9-6691-40eb-ba11-e80f844139f2 (ceph-create-lvm-devices) was prepared for execution.
2025-08-29 14:44:31.770012 | orchestrator | 2025-08-29 14:44:31 | INFO  | It takes a moment until task 6622d0a9-6691-40eb-ba11-e80f844139f2 (ceph-create-lvm-devices) has been started and output is visible here.
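The "Generating ClusterShell configuration from Ansible inventory" step maps Ansible inventory groups to ClusterShell group definitions (`group: host1,host2,...`). A rough sketch of that rendering under the assumption of a plain group-to-hosts mapping; the group names and membership below are illustrative, not the real testbed inventory:

```python
# Illustrative inventory: group name -> list of member hosts.
# These groups are assumptions for the example, not taken from the job.
inventory = {
    "ceph-osd": ["testbed-node-3", "testbed-node-4", "testbed-node-5"],
    "manager": ["testbed-manager"],
}

def to_clustershell_groups(inv: dict) -> str:
    """Render a group->hosts mapping in the flat ClusterShell
    groups-file syntax: one 'group: host1,host2' line per group."""
    return "\n".join(
        f"{group}: {','.join(hosts)}" for group, hosts in inv.items()
    )
```

In a real deployment the rendered text would be written to a ClusterShell groups file so that `clush`/`nodeset` can address the same groups as Ansible.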
2025-08-29 14:44:42.644362 | orchestrator |
2025-08-29 14:44:42.644440 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-08-29 14:44:42.644447 | orchestrator |
2025-08-29 14:44:42.644452 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 14:44:42.644458 | orchestrator | Friday 29 August 2025 14:44:35 +0000 (0:00:00.283) 0:00:00.283 *********
2025-08-29 14:44:42.644463 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 14:44:42.644468 | orchestrator |
2025-08-29 14:44:42.644472 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 14:44:42.644477 | orchestrator | Friday 29 August 2025 14:44:36 +0000 (0:00:00.222) 0:00:00.506 *********
2025-08-29 14:44:42.644481 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:44:42.644486 | orchestrator |
2025-08-29 14:44:42.644491 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:44:42.644495 | orchestrator | Friday 29 August 2025 14:44:36 +0000 (0:00:00.213) 0:00:00.720 *********
2025-08-29 14:44:42.644499 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-08-29 14:44:42.644505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-08-29 14:44:42.644509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-08-29 14:44:42.644514 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-08-29 14:44:42.644518 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-08-29 14:44:42.644522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-08-29 14:44:42.644526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-08-29 14:44:42.644530 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-08-29 14:44:42.644534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-08-29 14:44:42.644538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-08-29 14:44:42.644542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-08-29 14:44:42.644547 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-08-29 14:44:42.644551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-08-29 14:44:42.644555 | orchestrator |
2025-08-29 14:44:42.644559 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:44:42.644581 | orchestrator | Friday 29 August 2025 14:44:36 +0000 (0:00:00.339) 0:00:01.059 *********
2025-08-29 14:44:42.644585 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:42.644590 | orchestrator |
2025-08-29 14:44:42.644594 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:44:42.644598 | orchestrator | Friday 29 August 2025 14:44:37 +0000 (0:00:00.353) 0:00:01.413 *********
2025-08-29 14:44:42.644602 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:42.644606 | orchestrator |
2025-08-29 14:44:42.644610 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:44:42.644614 | orchestrator | Friday 29 August 2025 14:44:37 +0000 (0:00:00.182) 0:00:01.596 *********
2025-08-29 14:44:42.644618 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:42.644623 | orchestrator |
2025-08-29 14:44:42.644627 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:44:42.644631 | orchestrator | Friday 29 August 2025 14:44:37 +0000 (0:00:00.198) 0:00:01.794 *********
2025-08-29 14:44:42.644635 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:42.644639 | orchestrator |
2025-08-29 14:44:42.644643 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:44:42.644647 | orchestrator | Friday 29 August 2025 14:44:37 +0000 (0:00:00.192) 0:00:01.987 *********
2025-08-29 14:44:42.644651 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:42.644655 | orchestrator |
2025-08-29 14:44:42.644659 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:44:42.644663 | orchestrator | Friday 29 August 2025 14:44:37 +0000 (0:00:00.178) 0:00:02.166 *********
2025-08-29 14:44:42.644667 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:42.644671 | orchestrator |
2025-08-29 14:44:42.644676 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:44:42.644680 | orchestrator | Friday 29 August 2025 14:44:37 +0000 (0:00:00.194) 0:00:02.360 *********
2025-08-29 14:44:42.644684 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:42.644688 | orchestrator |
2025-08-29 14:44:42.644692 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:44:42.644696 | orchestrator | Friday 29 August 2025 14:44:38 +0000 (0:00:00.179) 0:00:02.540 *********
2025-08-29 14:44:42.644700 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:42.644704 | orchestrator |
2025-08-29 14:44:42.644708 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:44:42.644712 | orchestrator | Friday 29 August 2025 14:44:38 +0000 (0:00:00.177) 0:00:02.717 *********
2025-08-29 14:44:42.644716 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a)
2025-08-29 14:44:42.644721 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a)
2025-08-29 14:44:42.644725 | orchestrator |
2025-08-29 14:44:42.644729 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:44:42.644733 | orchestrator | Friday 29 August 2025 14:44:38 +0000 (0:00:00.363) 0:00:03.081 *********
2025-08-29 14:44:42.644747 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b5eca971-d360-4d10-a7ea-637f4b5fbeee)
2025-08-29 14:44:42.644752 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b5eca971-d360-4d10-a7ea-637f4b5fbeee)
2025-08-29 14:44:42.644756 | orchestrator |
2025-08-29 14:44:42.644760 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:44:42.644764 | orchestrator | Friday 29 August 2025 14:44:39 +0000 (0:00:00.420) 0:00:03.502 *********
2025-08-29 14:44:42.644768 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8530debd-d017-4f26-8837-9c6ea90d3888)
2025-08-29 14:44:42.644773 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8530debd-d017-4f26-8837-9c6ea90d3888)
2025-08-29 14:44:42.644777 | orchestrator |
2025-08-29 14:44:42.644781 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:44:42.644789 | orchestrator | Friday 29 August 2025 14:44:39 +0000 (0:00:00.553) 0:00:04.056 *********
2025-08-29 14:44:42.644793 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_994268b6-638b-49ae-9337-a5a883f2caf6)
2025-08-29 14:44:42.644797 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_994268b6-638b-49ae-9337-a5a883f2caf6)
2025-08-29 14:44:42.644801 | orchestrator |
2025-08-29 14:44:42.644805 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:44:42.644809 | orchestrator | Friday 29 August 2025 14:44:40 +0000 (0:00:00.697) 0:00:04.754 *********
2025-08-29 14:44:42.644813 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-08-29 14:44:42.644817 | orchestrator |
2025-08-29 14:44:42.644821 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:44:42.644825 | orchestrator | Friday 29 August 2025 14:44:40 +0000 (0:00:00.299) 0:00:05.053 *********
2025-08-29 14:44:42.644829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-08-29 14:44:42.644834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-08-29 14:44:42.644838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-08-29 14:44:42.644842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-08-29 14:44:42.644846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-08-29 14:44:42.644850 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-08-29 14:44:42.644854 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-08-29 14:44:42.644858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-08-29 14:44:42.644874 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-08-29 14:44:42.644878 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-08-29 14:44:42.644882 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-08-29 14:44:42.644886 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-08-29 14:44:42.644893 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-08-29 14:44:42.644897 | orchestrator |
2025-08-29 14:44:42.644901 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:44:42.644905 | orchestrator | Friday 29 August 2025 14:44:41 +0000 (0:00:00.361) 0:00:05.414 *********
2025-08-29 14:44:42.644909 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:42.644914 | orchestrator |
2025-08-29 14:44:42.644918 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:44:42.644922 | orchestrator | Friday 29 August 2025 14:44:41 +0000 (0:00:00.215) 0:00:05.629 *********
2025-08-29 14:44:42.644926 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:42.644931 | orchestrator |
2025-08-29 14:44:42.644935 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:44:42.644940 | orchestrator | Friday 29 August 2025 14:44:41 +0000 (0:00:00.191) 0:00:05.820 *********
2025-08-29 14:44:42.644944 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:42.644949 | orchestrator |
2025-08-29 14:44:42.644953 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:44:42.644958 | orchestrator | Friday 29 August 2025 14:44:41 +0000 (0:00:00.203) 0:00:06.023 *********
2025-08-29 14:44:42.644962 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:42.644967 | orchestrator |
2025-08-29 14:44:42.644972 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:44:42.644979 | orchestrator | Friday 29 August 2025 14:44:41 +0000 (0:00:00.208) 0:00:06.232 *********
2025-08-29 14:44:42.644984 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:42.644988 | orchestrator |
2025-08-29 14:44:42.644993 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:44:42.644997 | orchestrator | Friday 29 August 2025 14:44:42 +0000 (0:00:00.207) 0:00:06.440 *********
2025-08-29 14:44:42.645002 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:42.645006 | orchestrator |
2025-08-29 14:44:42.645011 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:44:42.645015 | orchestrator | Friday 29 August 2025 14:44:42 +0000 (0:00:00.201) 0:00:06.641 *********
2025-08-29 14:44:42.645020 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:42.645024 | orchestrator |
2025-08-29 14:44:42.645029 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:44:42.645033 | orchestrator | Friday 29 August 2025 14:44:42 +0000 (0:00:00.203) 0:00:06.845 *********
2025-08-29 14:44:42.645040 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:50.988992 | orchestrator |
2025-08-29 14:44:50.989107 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:44:50.989125 | orchestrator | Friday 29 August 2025 14:44:42 +0000 (0:00:00.195) 0:00:07.040 *********
2025-08-29 14:44:50.989137 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-08-29 14:44:50.989150 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-08-29 14:44:50.989162 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-08-29 14:44:50.989173 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-08-29 14:44:50.989247 | orchestrator |
2025-08-29 14:44:50.989261 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:44:50.989272 | orchestrator | Friday 29 August 2025 14:44:43 +0000 (0:00:01.189) 0:00:08.229 *********
2025-08-29 14:44:50.989283 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:50.989294 | orchestrator |
2025-08-29 14:44:50.989305 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:44:50.989316 | orchestrator | Friday 29 August 2025 14:44:44 +0000 (0:00:00.229) 0:00:08.459 *********
2025-08-29 14:44:50.989327 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:50.989337 | orchestrator |
2025-08-29 14:44:50.989349 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:44:50.989360 | orchestrator | Friday 29 August 2025 14:44:44 +0000 (0:00:00.247) 0:00:08.706 *********
2025-08-29 14:44:50.989370 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:50.989381 | orchestrator |
2025-08-29 14:44:50.989393 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:44:50.989405 | orchestrator | Friday 29 August 2025 14:44:44 +0000 (0:00:00.237) 0:00:08.944 *********
2025-08-29 14:44:50.989418 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:50.989429 | orchestrator |
2025-08-29 14:44:50.989441 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-08-29 14:44:50.989453 | orchestrator | Friday 29 August 2025 14:44:44 +0000 (0:00:00.207) 0:00:09.152 *********
2025-08-29 14:44:50.989465 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:50.989477 | orchestrator |
2025-08-29 14:44:50.989489 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-08-29 14:44:50.989501 | orchestrator | Friday 29 August 2025 14:44:44 +0000 (0:00:00.142) 0:00:09.294 *********
2025-08-29 14:44:50.989514 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '73f6d854-e6b6-54de-b399-c089d2858352'}})
2025-08-29 14:44:50.989527 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b0db6b07-6be9-5d1b-9597-ea455233b3a1'}})
2025-08-29 14:44:50.989539 | orchestrator |
2025-08-29 14:44:50.989550 | orchestrator | TASK [Create block VGs] ********************************************************
2025-08-29 14:44:50.989561 | orchestrator | Friday 29 August 2025 14:44:45 +0000 (0:00:00.192) 0:00:09.486 *********
2025-08-29 14:44:50.989573 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})
2025-08-29 14:44:50.989608 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})
2025-08-29 14:44:50.989620 | orchestrator |
2025-08-29 14:44:50.989631 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-08-29 14:44:50.989658 | orchestrator | Friday 29 August 2025 14:44:47 +0000 (0:00:01.954) 0:00:11.441 *********
2025-08-29 14:44:50.989670 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})
2025-08-29 14:44:50.989683 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})
2025-08-29 14:44:50.989694 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:50.989704 | orchestrator |
2025-08-29 14:44:50.989715 | orchestrator | TASK [Create block LVs] ********************************************************
2025-08-29 14:44:50.989726 | orchestrator | Friday 29 August 2025 14:44:47 +0000 (0:00:00.216) 0:00:11.657 *********
2025-08-29 14:44:50.989737 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})
2025-08-29 14:44:50.989748 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})
2025-08-29 14:44:50.989759 | orchestrator |
2025-08-29 14:44:50.989770 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-08-29 14:44:50.989781 | orchestrator | Friday 29 August 2025 14:44:48 +0000 (0:00:01.487) 0:00:13.145 *********
2025-08-29 14:44:50.989792 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})
2025-08-29 14:44:50.989804 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})
2025-08-29 14:44:50.989815 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:50.989826 | orchestrator |
2025-08-29 14:44:50.989837 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-08-29 14:44:50.989848 | orchestrator | Friday 29 August 2025 14:44:48 +0000 (0:00:00.158) 0:00:13.303 *********
2025-08-29 14:44:50.989859 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:50.989869 | orchestrator |
2025-08-29 14:44:50.989880 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-08-29 14:44:50.989908 | orchestrator | Friday 29 August 2025 14:44:49 +0000 (0:00:00.147) 0:00:13.451 *********
2025-08-29 14:44:50.989919 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})
2025-08-29 14:44:50.989931 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})
2025-08-29 14:44:50.989942 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:50.989952 | orchestrator |
2025-08-29 14:44:50.989963 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-08-29 14:44:50.989974 | orchestrator | Friday 29 August 2025 14:44:49 +0000 (0:00:00.402) 0:00:13.853 *********
2025-08-29 14:44:50.989985 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:50.989995 | orchestrator |
2025-08-29 14:44:50.990006 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-08-29 14:44:50.990083 | orchestrator | Friday 29 August 2025 14:44:49 +0000 (0:00:00.132) 0:00:13.986 *********
2025-08-29 14:44:50.990097 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})
2025-08-29 14:44:50.990118 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})
2025-08-29 14:44:50.990128 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:50.990139 | orchestrator |
2025-08-29 14:44:50.990150 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-08-29 14:44:50.990160 | orchestrator | Friday 29 August 2025 14:44:49 +0000 (0:00:00.150) 0:00:14.143 *********
2025-08-29 14:44:50.990171 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:50.990203 | orchestrator |
2025-08-29 14:44:50.990223 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-08-29 14:44:50.990242 | orchestrator | Friday 29 August 2025 14:44:49 +0000 (0:00:00.154) 0:00:14.294 *********
2025-08-29 14:44:50.990256 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})
2025-08-29 14:44:50.990267 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})
2025-08-29 14:44:50.990277 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:50.990288 | orchestrator |
2025-08-29 14:44:50.990298 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-08-29 14:44:50.990309 | orchestrator | Friday 29 August 2025 14:44:50 +0000 (0:00:00.138) 0:00:14.448 *********
2025-08-29 14:44:50.990320 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:44:50.990330 | orchestrator |
2025-08-29 14:44:50.990341 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-08-29 14:44:50.990351 | orchestrator | Friday 29 August 2025 14:44:50 +0000 (0:00:00.154) 0:00:14.586 *********
2025-08-29 14:44:50.990363 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})
2025-08-29 14:44:50.990374 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})
2025-08-29 14:44:50.990384 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:50.990395 | orchestrator |
2025-08-29 14:44:50.990405 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-08-29 14:44:50.990426 | orchestrator | Friday 29 August 2025 14:44:50 +0000 (0:00:00.154) 0:00:14.741 *********
2025-08-29 14:44:50.990437 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})
2025-08-29 14:44:50.990448 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})  2025-08-29 14:44:50.990459 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:50.990469 | orchestrator | 2025-08-29 14:44:50.990480 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-08-29 14:44:50.990491 | orchestrator | Friday 29 August 2025 14:44:50 +0000 (0:00:00.150) 0:00:14.892 ********* 2025-08-29 14:44:50.990502 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})  2025-08-29 14:44:50.990512 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})  2025-08-29 14:44:50.990523 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:50.990534 | orchestrator | 2025-08-29 14:44:50.990544 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-08-29 14:44:50.990555 | orchestrator | Friday 29 August 2025 14:44:50 +0000 (0:00:00.172) 0:00:15.065 ********* 2025-08-29 14:44:50.990565 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:50.990583 | orchestrator | 2025-08-29 14:44:50.990594 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-08-29 14:44:50.990604 | orchestrator | Friday 29 August 2025 14:44:50 +0000 (0:00:00.156) 0:00:15.222 ********* 2025-08-29 14:44:50.990615 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:50.990626 | orchestrator | 2025-08-29 14:44:50.990644 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-08-29 14:44:57.097970 | orchestrator | Friday 29 August 2025 14:44:50 +0000 (0:00:00.162) 
0:00:15.384 ********* 2025-08-29 14:44:57.098151 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.098182 | orchestrator | 2025-08-29 14:44:57.098272 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-08-29 14:44:57.098295 | orchestrator | Friday 29 August 2025 14:44:51 +0000 (0:00:00.139) 0:00:15.523 ********* 2025-08-29 14:44:57.098314 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 14:44:57.098333 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-08-29 14:44:57.098351 | orchestrator | } 2025-08-29 14:44:57.098372 | orchestrator | 2025-08-29 14:44:57.098389 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-08-29 14:44:57.098409 | orchestrator | Friday 29 August 2025 14:44:51 +0000 (0:00:00.382) 0:00:15.906 ********* 2025-08-29 14:44:57.098427 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 14:44:57.098445 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-08-29 14:44:57.098464 | orchestrator | } 2025-08-29 14:44:57.098482 | orchestrator | 2025-08-29 14:44:57.098500 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-08-29 14:44:57.098519 | orchestrator | Friday 29 August 2025 14:44:51 +0000 (0:00:00.141) 0:00:16.047 ********* 2025-08-29 14:44:57.098539 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 14:44:57.098558 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-08-29 14:44:57.098577 | orchestrator | } 2025-08-29 14:44:57.098597 | orchestrator | 2025-08-29 14:44:57.098616 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-08-29 14:44:57.098634 | orchestrator | Friday 29 August 2025 14:44:51 +0000 (0:00:00.153) 0:00:16.200 ********* 2025-08-29 14:44:57.098653 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:44:57.098672 | orchestrator | 2025-08-29 14:44:57.098690 | orchestrator | TASK [Gather WAL VGs 
with total and available size in bytes] ******************* 2025-08-29 14:44:57.098711 | orchestrator | Friday 29 August 2025 14:44:52 +0000 (0:00:00.681) 0:00:16.881 ********* 2025-08-29 14:44:57.098729 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:44:57.098749 | orchestrator | 2025-08-29 14:44:57.098768 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-08-29 14:44:57.098787 | orchestrator | Friday 29 August 2025 14:44:52 +0000 (0:00:00.504) 0:00:17.386 ********* 2025-08-29 14:44:57.098805 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:44:57.098823 | orchestrator | 2025-08-29 14:44:57.098841 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-08-29 14:44:57.098860 | orchestrator | Friday 29 August 2025 14:44:53 +0000 (0:00:00.516) 0:00:17.903 ********* 2025-08-29 14:44:57.098880 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:44:57.098904 | orchestrator | 2025-08-29 14:44:57.098922 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-08-29 14:44:57.098943 | orchestrator | Friday 29 August 2025 14:44:53 +0000 (0:00:00.156) 0:00:18.059 ********* 2025-08-29 14:44:57.098962 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.098980 | orchestrator | 2025-08-29 14:44:57.098998 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-08-29 14:44:57.099017 | orchestrator | Friday 29 August 2025 14:44:53 +0000 (0:00:00.115) 0:00:18.175 ********* 2025-08-29 14:44:57.099037 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.099055 | orchestrator | 2025-08-29 14:44:57.099076 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-08-29 14:44:57.099093 | orchestrator | Friday 29 August 2025 14:44:53 +0000 (0:00:00.123) 0:00:18.299 ********* 2025-08-29 14:44:57.099139 | orchestrator | ok: 
[testbed-node-3] => { 2025-08-29 14:44:57.099159 | orchestrator |  "vgs_report": { 2025-08-29 14:44:57.099220 | orchestrator |  "vg": [] 2025-08-29 14:44:57.099243 | orchestrator |  } 2025-08-29 14:44:57.099262 | orchestrator | } 2025-08-29 14:44:57.099280 | orchestrator | 2025-08-29 14:44:57.099300 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-08-29 14:44:57.099319 | orchestrator | Friday 29 August 2025 14:44:54 +0000 (0:00:00.153) 0:00:18.453 ********* 2025-08-29 14:44:57.099339 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.099359 | orchestrator | 2025-08-29 14:44:57.099377 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-08-29 14:44:57.099396 | orchestrator | Friday 29 August 2025 14:44:54 +0000 (0:00:00.151) 0:00:18.605 ********* 2025-08-29 14:44:57.099414 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.099433 | orchestrator | 2025-08-29 14:44:57.099457 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-08-29 14:44:57.099476 | orchestrator | Friday 29 August 2025 14:44:54 +0000 (0:00:00.156) 0:00:18.761 ********* 2025-08-29 14:44:57.099494 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.099513 | orchestrator | 2025-08-29 14:44:57.099531 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-08-29 14:44:57.099550 | orchestrator | Friday 29 August 2025 14:44:54 +0000 (0:00:00.385) 0:00:19.147 ********* 2025-08-29 14:44:57.099572 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.099590 | orchestrator | 2025-08-29 14:44:57.099608 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-08-29 14:44:57.099627 | orchestrator | Friday 29 August 2025 14:44:54 +0000 (0:00:00.118) 0:00:19.265 ********* 2025-08-29 14:44:57.099645 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 14:44:57.099665 | orchestrator | 2025-08-29 14:44:57.099685 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-08-29 14:44:57.099704 | orchestrator | Friday 29 August 2025 14:44:55 +0000 (0:00:00.151) 0:00:19.417 ********* 2025-08-29 14:44:57.099723 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.099742 | orchestrator | 2025-08-29 14:44:57.099759 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-08-29 14:44:57.099779 | orchestrator | Friday 29 August 2025 14:44:55 +0000 (0:00:00.125) 0:00:19.542 ********* 2025-08-29 14:44:57.099801 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.099821 | orchestrator | 2025-08-29 14:44:57.099841 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-08-29 14:44:57.099859 | orchestrator | Friday 29 August 2025 14:44:55 +0000 (0:00:00.119) 0:00:19.661 ********* 2025-08-29 14:44:57.099880 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.099900 | orchestrator | 2025-08-29 14:44:57.099920 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-08-29 14:44:57.099968 | orchestrator | Friday 29 August 2025 14:44:55 +0000 (0:00:00.116) 0:00:19.778 ********* 2025-08-29 14:44:57.099990 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.100009 | orchestrator | 2025-08-29 14:44:57.100028 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-08-29 14:44:57.100050 | orchestrator | Friday 29 August 2025 14:44:55 +0000 (0:00:00.115) 0:00:19.894 ********* 2025-08-29 14:44:57.100067 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.100086 | orchestrator | 2025-08-29 14:44:57.100104 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-08-29 14:44:57.100123 | 
orchestrator | Friday 29 August 2025 14:44:55 +0000 (0:00:00.119) 0:00:20.014 ********* 2025-08-29 14:44:57.100141 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.100159 | orchestrator | 2025-08-29 14:44:57.100180 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-08-29 14:44:57.100236 | orchestrator | Friday 29 August 2025 14:44:55 +0000 (0:00:00.132) 0:00:20.146 ********* 2025-08-29 14:44:57.100256 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.100275 | orchestrator | 2025-08-29 14:44:57.100309 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-08-29 14:44:57.100327 | orchestrator | Friday 29 August 2025 14:44:55 +0000 (0:00:00.129) 0:00:20.276 ********* 2025-08-29 14:44:57.100346 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.100364 | orchestrator | 2025-08-29 14:44:57.100383 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-08-29 14:44:57.100400 | orchestrator | Friday 29 August 2025 14:44:55 +0000 (0:00:00.115) 0:00:20.392 ********* 2025-08-29 14:44:57.100418 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.100437 | orchestrator | 2025-08-29 14:44:57.100455 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-08-29 14:44:57.100474 | orchestrator | Friday 29 August 2025 14:44:56 +0000 (0:00:00.118) 0:00:20.511 ********* 2025-08-29 14:44:57.100494 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})  2025-08-29 14:44:57.100514 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})  2025-08-29 14:44:57.100534 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
14:44:57.100552 | orchestrator | 2025-08-29 14:44:57.100570 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-08-29 14:44:57.100588 | orchestrator | Friday 29 August 2025 14:44:56 +0000 (0:00:00.311) 0:00:20.823 ********* 2025-08-29 14:44:57.100606 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})  2025-08-29 14:44:57.100625 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})  2025-08-29 14:44:57.100645 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.100662 | orchestrator | 2025-08-29 14:44:57.100681 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-08-29 14:44:57.100701 | orchestrator | Friday 29 August 2025 14:44:56 +0000 (0:00:00.133) 0:00:20.956 ********* 2025-08-29 14:44:57.100721 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})  2025-08-29 14:44:57.100739 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})  2025-08-29 14:44:57.100762 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.100780 | orchestrator | 2025-08-29 14:44:57.100800 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-08-29 14:44:57.100817 | orchestrator | Friday 29 August 2025 14:44:56 +0000 (0:00:00.135) 0:00:21.091 ********* 2025-08-29 14:44:57.100836 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})  2025-08-29 
14:44:57.100856 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})  2025-08-29 14:44:57.100876 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.100894 | orchestrator | 2025-08-29 14:44:57.100913 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-08-29 14:44:57.100933 | orchestrator | Friday 29 August 2025 14:44:56 +0000 (0:00:00.137) 0:00:21.228 ********* 2025-08-29 14:44:57.100955 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})  2025-08-29 14:44:57.100974 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})  2025-08-29 14:44:57.100992 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:44:57.101025 | orchestrator | 2025-08-29 14:44:57.101046 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-08-29 14:44:57.101064 | orchestrator | Friday 29 August 2025 14:44:56 +0000 (0:00:00.128) 0:00:21.357 ********* 2025-08-29 14:44:57.101082 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})  2025-08-29 14:44:57.101116 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})  2025-08-29 14:45:01.759403 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:01.759486 | orchestrator | 2025-08-29 14:45:01.759501 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-08-29 14:45:01.759512 | orchestrator | Friday 29 August 2025 
14:44:57 +0000 (0:00:00.137) 0:00:21.494 ********* 2025-08-29 14:45:01.759538 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})  2025-08-29 14:45:01.759550 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})  2025-08-29 14:45:01.759560 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:01.759570 | orchestrator | 2025-08-29 14:45:01.759580 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-08-29 14:45:01.759590 | orchestrator | Friday 29 August 2025 14:44:57 +0000 (0:00:00.150) 0:00:21.645 ********* 2025-08-29 14:45:01.759599 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})  2025-08-29 14:45:01.759610 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})  2025-08-29 14:45:01.759620 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:01.759630 | orchestrator | 2025-08-29 14:45:01.759640 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-08-29 14:45:01.759650 | orchestrator | Friday 29 August 2025 14:44:57 +0000 (0:00:00.144) 0:00:21.790 ********* 2025-08-29 14:45:01.759660 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:45:01.759670 | orchestrator | 2025-08-29 14:45:01.759680 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-08-29 14:45:01.759689 | orchestrator | Friday 29 August 2025 14:44:57 +0000 (0:00:00.487) 0:00:22.277 ********* 2025-08-29 14:45:01.759699 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:45:01.759709 | 
orchestrator | 2025-08-29 14:45:01.759718 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-08-29 14:45:01.759728 | orchestrator | Friday 29 August 2025 14:44:58 +0000 (0:00:00.479) 0:00:22.757 ********* 2025-08-29 14:45:01.759738 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:45:01.759747 | orchestrator | 2025-08-29 14:45:01.759757 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-08-29 14:45:01.759766 | orchestrator | Friday 29 August 2025 14:44:58 +0000 (0:00:00.113) 0:00:22.870 ********* 2025-08-29 14:45:01.759776 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'vg_name': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'}) 2025-08-29 14:45:01.759787 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'vg_name': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'}) 2025-08-29 14:45:01.759797 | orchestrator | 2025-08-29 14:45:01.759810 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-08-29 14:45:01.759820 | orchestrator | Friday 29 August 2025 14:44:58 +0000 (0:00:00.144) 0:00:23.015 ********* 2025-08-29 14:45:01.759830 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})  2025-08-29 14:45:01.759858 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})  2025-08-29 14:45:01.759869 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:01.759878 | orchestrator | 2025-08-29 14:45:01.759888 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-08-29 14:45:01.759898 | orchestrator | Friday 29 August 2025 14:44:58 +0000 
(0:00:00.263) 0:00:23.278 ********* 2025-08-29 14:45:01.759908 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})  2025-08-29 14:45:01.759918 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})  2025-08-29 14:45:01.759927 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:01.759937 | orchestrator | 2025-08-29 14:45:01.759946 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-08-29 14:45:01.759956 | orchestrator | Friday 29 August 2025 14:44:59 +0000 (0:00:00.135) 0:00:23.414 ********* 2025-08-29 14:45:01.759968 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'})  2025-08-29 14:45:01.759979 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'})  2025-08-29 14:45:01.759990 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:01.760001 | orchestrator | 2025-08-29 14:45:01.760013 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-08-29 14:45:01.760030 | orchestrator | Friday 29 August 2025 14:44:59 +0000 (0:00:00.139) 0:00:23.553 ********* 2025-08-29 14:45:01.760042 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 14:45:01.760053 | orchestrator |  "lvm_report": { 2025-08-29 14:45:01.760065 | orchestrator |  "lv": [ 2025-08-29 14:45:01.760076 | orchestrator |  { 2025-08-29 14:45:01.760101 | orchestrator |  "lv_name": "osd-block-73f6d854-e6b6-54de-b399-c089d2858352", 2025-08-29 14:45:01.760113 | orchestrator |  "vg_name": "ceph-73f6d854-e6b6-54de-b399-c089d2858352" 2025-08-29 
14:45:01.760124 | orchestrator |  }, 2025-08-29 14:45:01.760135 | orchestrator |  { 2025-08-29 14:45:01.760146 | orchestrator |  "lv_name": "osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1", 2025-08-29 14:45:01.760156 | orchestrator |  "vg_name": "ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1" 2025-08-29 14:45:01.760167 | orchestrator |  } 2025-08-29 14:45:01.760177 | orchestrator |  ], 2025-08-29 14:45:01.760187 | orchestrator |  "pv": [ 2025-08-29 14:45:01.760216 | orchestrator |  { 2025-08-29 14:45:01.760228 | orchestrator |  "pv_name": "/dev/sdb", 2025-08-29 14:45:01.760238 | orchestrator |  "vg_name": "ceph-73f6d854-e6b6-54de-b399-c089d2858352" 2025-08-29 14:45:01.760249 | orchestrator |  }, 2025-08-29 14:45:01.760259 | orchestrator |  { 2025-08-29 14:45:01.760270 | orchestrator |  "pv_name": "/dev/sdc", 2025-08-29 14:45:01.760281 | orchestrator |  "vg_name": "ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1" 2025-08-29 14:45:01.760299 | orchestrator |  } 2025-08-29 14:45:01.760311 | orchestrator |  ] 2025-08-29 14:45:01.760322 | orchestrator |  } 2025-08-29 14:45:01.760333 | orchestrator | } 2025-08-29 14:45:01.760343 | orchestrator | 2025-08-29 14:45:01.760352 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-08-29 14:45:01.760362 | orchestrator | 2025-08-29 14:45:01.760377 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 14:45:01.760389 | orchestrator | Friday 29 August 2025 14:44:59 +0000 (0:00:00.256) 0:00:23.810 ********* 2025-08-29 14:45:01.760398 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-08-29 14:45:01.760415 | orchestrator | 2025-08-29 14:45:01.760425 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 14:45:01.760434 | orchestrator | Friday 29 August 2025 14:44:59 +0000 (0:00:00.234) 0:00:24.045 ********* 2025-08-29 14:45:01.760443 | orchestrator | ok: 
[testbed-node-4] 2025-08-29 14:45:01.760453 | orchestrator | 2025-08-29 14:45:01.760462 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:01.760472 | orchestrator | Friday 29 August 2025 14:44:59 +0000 (0:00:00.253) 0:00:24.299 ********* 2025-08-29 14:45:01.760481 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-08-29 14:45:01.760490 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-08-29 14:45:01.760500 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-08-29 14:45:01.760509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-08-29 14:45:01.760518 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-08-29 14:45:01.760528 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-08-29 14:45:01.760537 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-08-29 14:45:01.760551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-08-29 14:45:01.760561 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-08-29 14:45:01.760570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-08-29 14:45:01.760580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-08-29 14:45:01.760589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-08-29 14:45:01.760599 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-08-29 14:45:01.760608 | orchestrator | 2025-08-29 
14:45:01.760617 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:01.760627 | orchestrator | Friday 29 August 2025 14:45:00 +0000 (0:00:00.341) 0:00:24.640 ********* 2025-08-29 14:45:01.760636 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:01.760645 | orchestrator | 2025-08-29 14:45:01.760655 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:01.760664 | orchestrator | Friday 29 August 2025 14:45:00 +0000 (0:00:00.165) 0:00:24.806 ********* 2025-08-29 14:45:01.760673 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:01.760683 | orchestrator | 2025-08-29 14:45:01.760692 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:01.760702 | orchestrator | Friday 29 August 2025 14:45:00 +0000 (0:00:00.191) 0:00:24.998 ********* 2025-08-29 14:45:01.760711 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:01.760720 | orchestrator | 2025-08-29 14:45:01.760730 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:01.760739 | orchestrator | Friday 29 August 2025 14:45:01 +0000 (0:00:00.458) 0:00:25.456 ********* 2025-08-29 14:45:01.760749 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:01.760758 | orchestrator | 2025-08-29 14:45:01.760767 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:01.760777 | orchestrator | Friday 29 August 2025 14:45:01 +0000 (0:00:00.175) 0:00:25.632 ********* 2025-08-29 14:45:01.760786 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:01.760795 | orchestrator | 2025-08-29 14:45:01.760805 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:01.760814 | orchestrator | Friday 29 August 2025 14:45:01 +0000 (0:00:00.182) 
0:00:25.814 *********
2025-08-29 14:45:01.760823 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:01.760833 | orchestrator |
2025-08-29 14:45:01.760848 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:45:01.760857 | orchestrator | Friday 29 August 2025  14:45:01 +0000 (0:00:00.167)       0:00:25.982 *********
2025-08-29 14:45:01.760867 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:01.760877 | orchestrator |
2025-08-29 14:45:01.760892 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:45:11.147734 | orchestrator | Friday 29 August 2025  14:45:01 +0000 (0:00:00.175)       0:00:26.158 *********
2025-08-29 14:45:11.147835 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:11.147851 | orchestrator |
2025-08-29 14:45:11.147863 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:45:11.147875 | orchestrator | Friday 29 August 2025  14:45:01 +0000 (0:00:00.168)       0:00:26.327 *********
2025-08-29 14:45:11.147886 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026)
2025-08-29 14:45:11.147898 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026)
2025-08-29 14:45:11.147909 | orchestrator |
2025-08-29 14:45:11.147920 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:45:11.147931 | orchestrator | Friday 29 August 2025  14:45:02 +0000 (0:00:00.355)       0:00:26.682 *********
2025-08-29 14:45:11.147941 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cfe2d7b1-468c-4925-a2ef-e57e3e9904a6)
2025-08-29 14:45:11.147952 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cfe2d7b1-468c-4925-a2ef-e57e3e9904a6)
2025-08-29 14:45:11.147963 | orchestrator |
2025-08-29 14:45:11.147974 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:45:11.147985 | orchestrator | Friday 29 August 2025  14:45:02 +0000 (0:00:00.390)       0:00:27.072 *********
2025-08-29 14:45:11.147996 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2af32ac1-7951-4112-a429-d0343cb67ad9)
2025-08-29 14:45:11.148006 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2af32ac1-7951-4112-a429-d0343cb67ad9)
2025-08-29 14:45:11.148017 | orchestrator |
2025-08-29 14:45:11.148028 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:45:11.148038 | orchestrator | Friday 29 August 2025  14:45:03 +0000 (0:00:00.392)       0:00:27.464 *********
2025-08-29 14:45:11.148049 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e4cd25e5-e70d-493f-a3e8-ae6027cdfc98)
2025-08-29 14:45:11.148060 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e4cd25e5-e70d-493f-a3e8-ae6027cdfc98)
2025-08-29 14:45:11.148070 | orchestrator |
2025-08-29 14:45:11.148081 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:45:11.148092 | orchestrator | Friday 29 August 2025  14:45:03 +0000 (0:00:00.416)       0:00:27.881 *********
2025-08-29 14:45:11.148103 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-08-29 14:45:11.148113 | orchestrator |
2025-08-29 14:45:11.148124 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:45:11.148135 | orchestrator | Friday 29 August 2025  14:45:03 +0000 (0:00:00.301)       0:00:28.183 *********
2025-08-29 14:45:11.148146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-08-29 14:45:11.148157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-08-29 14:45:11.148168 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-08-29 14:45:11.148178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-08-29 14:45:11.148189 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-08-29 14:45:11.148222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-08-29 14:45:11.148233 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-08-29 14:45:11.148266 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-08-29 14:45:11.148279 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-08-29 14:45:11.148291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-08-29 14:45:11.148303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-08-29 14:45:11.148315 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-08-29 14:45:11.148326 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-08-29 14:45:11.148339 | orchestrator |
2025-08-29 14:45:11.148366 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:45:11.148379 | orchestrator | Friday 29 August 2025  14:45:04 +0000 (0:00:00.608)       0:00:28.791 *********
2025-08-29 14:45:11.148391 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:11.148403 | orchestrator |
2025-08-29 14:45:11.148415 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:45:11.148427 | orchestrator | Friday 29 August 2025  14:45:04 +0000 (0:00:00.197)       0:00:28.989 *********
2025-08-29 14:45:11.148439 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:11.148452 | orchestrator |
2025-08-29 14:45:11.148464 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:45:11.148476 | orchestrator | Friday 29 August 2025  14:45:04 +0000 (0:00:00.186)       0:00:29.175 *********
2025-08-29 14:45:11.148488 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:11.148500 | orchestrator |
2025-08-29 14:45:11.148512 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:45:11.148524 | orchestrator | Friday 29 August 2025  14:45:04 +0000 (0:00:00.182)       0:00:29.358 *********
2025-08-29 14:45:11.148536 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:11.148548 | orchestrator |
2025-08-29 14:45:11.148577 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:45:11.148590 | orchestrator | Friday 29 August 2025  14:45:05 +0000 (0:00:00.178)       0:00:29.536 *********
2025-08-29 14:45:11.148602 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:11.148614 | orchestrator |
2025-08-29 14:45:11.148625 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:45:11.148636 | orchestrator | Friday 29 August 2025  14:45:05 +0000 (0:00:00.184)       0:00:29.720 *********
2025-08-29 14:45:11.148647 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:11.148658 | orchestrator |
2025-08-29 14:45:11.148669 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:45:11.148679 | orchestrator | Friday 29 August 2025  14:45:05 +0000 (0:00:00.162)       0:00:29.883 *********
2025-08-29 14:45:11.148690 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:11.148701 | orchestrator |
2025-08-29 14:45:11.148711 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:45:11.148722 | orchestrator | Friday 29 August 2025  14:45:05 +0000 (0:00:00.168)       0:00:30.052 *********
2025-08-29 14:45:11.148732 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:11.148743 | orchestrator |
2025-08-29 14:45:11.148754 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:45:11.148765 | orchestrator | Friday 29 August 2025  14:45:05 +0000 (0:00:00.179)       0:00:30.231 *********
2025-08-29 14:45:11.148775 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-08-29 14:45:11.148786 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-08-29 14:45:11.148796 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-08-29 14:45:11.148807 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-08-29 14:45:11.148817 | orchestrator |
2025-08-29 14:45:11.148828 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:45:11.148839 | orchestrator | Friday 29 August 2025  14:45:06 +0000 (0:00:00.739)       0:00:30.970 *********
2025-08-29 14:45:11.148858 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:11.148868 | orchestrator |
2025-08-29 14:45:11.148879 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:45:11.148890 | orchestrator | Friday 29 August 2025  14:45:06 +0000 (0:00:00.178)       0:00:31.149 *********
2025-08-29 14:45:11.148901 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:11.148911 | orchestrator |
2025-08-29 14:45:11.148922 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:45:11.148933 | orchestrator | Friday 29 August 2025  14:45:06 +0000 (0:00:00.197)       0:00:31.346 *********
2025-08-29 14:45:11.148944 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:11.148954 | orchestrator |
2025-08-29 14:45:11.148965 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:45:11.148976 | orchestrator | Friday 29 August 2025  14:45:07 +0000 (0:00:00.516)       0:00:31.863 *********
2025-08-29 14:45:11.148986 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:11.148997 | orchestrator |
2025-08-29 14:45:11.149008 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-08-29 14:45:11.149019 | orchestrator | Friday 29 August 2025  14:45:07 +0000 (0:00:00.161)       0:00:32.024 *********
2025-08-29 14:45:11.149034 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:11.149045 | orchestrator |
2025-08-29 14:45:11.149056 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-08-29 14:45:11.149067 | orchestrator | Friday 29 August 2025  14:45:07 +0000 (0:00:00.126)       0:00:32.151 *********
2025-08-29 14:45:11.149078 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8955e74f-f88a-5c8e-a869-5f490c143acc'}})
2025-08-29 14:45:11.149089 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '76bc2ac4-c5cd-591d-a103-fddbd09e4373'}})
2025-08-29 14:45:11.149099 | orchestrator |
2025-08-29 14:45:11.149110 | orchestrator | TASK [Create block VGs] ********************************************************
2025-08-29 14:45:11.149121 | orchestrator | Friday 29 August 2025  14:45:07 +0000 (0:00:00.159)       0:00:32.310 *********
2025-08-29 14:45:11.149132 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:11.149144 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:11.149154 | orchestrator |
2025-08-29 14:45:11.149165 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-08-29 14:45:11.149176 | orchestrator | Friday 29 August 2025  14:45:09 +0000 (0:00:01.835)       0:00:34.146 *********
2025-08-29 14:45:11.149187 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:11.149233 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:11.149245 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:11.149256 | orchestrator |
2025-08-29 14:45:11.149267 | orchestrator | TASK [Create block LVs] ********************************************************
2025-08-29 14:45:11.149277 | orchestrator | Friday 29 August 2025  14:45:09 +0000 (0:00:00.131)       0:00:34.278 *********
2025-08-29 14:45:11.149288 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:11.149299 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:11.149310 | orchestrator |
2025-08-29 14:45:11.149328 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-08-29 14:45:16.498418 | orchestrator | Friday 29 August 2025  14:45:11 +0000 (0:00:01.266)       0:00:35.544 *********
2025-08-29 14:45:16.498585 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:16.498605 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:16.498617 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.498629 | orchestrator |
2025-08-29 14:45:16.498641 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-08-29 14:45:16.498652 | orchestrator | Friday 29 August 2025  14:45:11 +0000 (0:00:00.158)       0:00:35.703 *********
2025-08-29 14:45:16.498663 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.498674 | orchestrator |
2025-08-29 14:45:16.498685 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-08-29 14:45:16.498696 | orchestrator | Friday 29 August 2025  14:45:11 +0000 (0:00:00.122)       0:00:35.825 *********
2025-08-29 14:45:16.498708 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:16.498719 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:16.498730 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.498741 | orchestrator |
2025-08-29 14:45:16.498751 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-08-29 14:45:16.498762 | orchestrator | Friday 29 August 2025  14:45:11 +0000 (0:00:00.135)       0:00:35.961 *********
2025-08-29 14:45:16.498773 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.498783 | orchestrator |
2025-08-29 14:45:16.498794 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-08-29 14:45:16.498804 | orchestrator | Friday 29 August 2025  14:45:11 +0000 (0:00:00.116)       0:00:36.077 *********
2025-08-29 14:45:16.498815 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:16.498826 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:16.498837 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.498848 | orchestrator |
2025-08-29 14:45:16.498859 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-08-29 14:45:16.498870 | orchestrator | Friday 29 August 2025  14:45:11 +0000 (0:00:00.155)       0:00:36.233 *********
2025-08-29 14:45:16.498898 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.498909 | orchestrator |
2025-08-29 14:45:16.498920 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-08-29 14:45:16.498932 | orchestrator | Friday 29 August 2025  14:45:12 +0000 (0:00:00.260)       0:00:36.493 *********
2025-08-29 14:45:16.498943 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:16.498956 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:16.498968 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.498979 | orchestrator |
2025-08-29 14:45:16.498991 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-08-29 14:45:16.499002 | orchestrator | Friday 29 August 2025  14:45:12 +0000 (0:00:00.132)       0:00:36.626 *********
2025-08-29 14:45:16.499014 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:45:16.499027 | orchestrator |
2025-08-29 14:45:16.499038 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-08-29 14:45:16.499049 | orchestrator | Friday 29 August 2025  14:45:12 +0000 (0:00:00.132)       0:00:36.758 *********
2025-08-29 14:45:16.499069 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:16.499082 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:16.499094 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.499106 | orchestrator |
2025-08-29 14:45:16.499118 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-08-29 14:45:16.499129 | orchestrator | Friday 29 August 2025  14:45:12 +0000 (0:00:00.138)       0:00:36.897 *********
2025-08-29 14:45:16.499141 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:16.499154 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:16.499165 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.499176 | orchestrator |
2025-08-29 14:45:16.499187 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-08-29 14:45:16.499198 | orchestrator | Friday 29 August 2025  14:45:12 +0000 (0:00:00.152)       0:00:37.049 *********
2025-08-29 14:45:16.499253 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:16.499266 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:16.499277 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.499287 | orchestrator |
2025-08-29 14:45:16.499298 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-08-29 14:45:16.499309 | orchestrator | Friday 29 August 2025  14:45:12 +0000 (0:00:00.147)       0:00:37.196 *********
2025-08-29 14:45:16.499320 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.499330 | orchestrator |
2025-08-29 14:45:16.499341 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-08-29 14:45:16.499351 | orchestrator | Friday 29 August 2025  14:45:12 +0000 (0:00:00.109)       0:00:37.306 *********
2025-08-29 14:45:16.499362 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.499372 | orchestrator |
2025-08-29 14:45:16.499383 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-08-29 14:45:16.499393 | orchestrator | Friday 29 August 2025  14:45:13 +0000 (0:00:00.146)       0:00:37.453 *********
2025-08-29 14:45:16.499404 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.499414 | orchestrator |
2025-08-29 14:45:16.499425 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-08-29 14:45:16.499436 | orchestrator | Friday 29 August 2025  14:45:13 +0000 (0:00:00.114)       0:00:37.567 *********
2025-08-29 14:45:16.499446 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 14:45:16.499457 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-08-29 14:45:16.499468 | orchestrator | }
2025-08-29 14:45:16.499478 | orchestrator |
2025-08-29 14:45:16.499489 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-08-29 14:45:16.499500 | orchestrator | Friday 29 August 2025  14:45:13 +0000 (0:00:00.122)       0:00:37.690 *********
2025-08-29 14:45:16.499511 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 14:45:16.499521 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-08-29 14:45:16.499532 | orchestrator | }
2025-08-29 14:45:16.499542 | orchestrator |
2025-08-29 14:45:16.499553 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-08-29 14:45:16.499563 | orchestrator | Friday 29 August 2025  14:45:13 +0000 (0:00:00.125)       0:00:37.815 *********
2025-08-29 14:45:16.499574 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 14:45:16.499584 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-08-29 14:45:16.499603 | orchestrator | }
2025-08-29 14:45:16.499613 | orchestrator |
2025-08-29 14:45:16.499624 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-08-29 14:45:16.499635 | orchestrator | Friday 29 August 2025  14:45:13 +0000 (0:00:00.150)       0:00:37.966 *********
2025-08-29 14:45:16.499645 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:45:16.499656 | orchestrator |
2025-08-29 14:45:16.499666 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-08-29 14:45:16.499677 | orchestrator | Friday 29 August 2025  14:45:14 +0000 (0:00:00.758)       0:00:38.724 *********
2025-08-29 14:45:16.499688 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:45:16.499699 | orchestrator |
2025-08-29 14:45:16.499709 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-08-29 14:45:16.499720 | orchestrator | Friday 29 August 2025  14:45:14 +0000 (0:00:00.530)       0:00:39.255 *********
2025-08-29 14:45:16.499731 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:45:16.499742 | orchestrator |
2025-08-29 14:45:16.499752 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-08-29 14:45:16.499763 | orchestrator | Friday 29 August 2025  14:45:15 +0000 (0:00:00.524)       0:00:39.780 *********
2025-08-29 14:45:16.499774 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:45:16.499784 | orchestrator |
2025-08-29 14:45:16.499795 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-08-29 14:45:16.499805 | orchestrator | Friday 29 August 2025  14:45:15 +0000 (0:00:00.152)       0:00:39.932 *********
2025-08-29 14:45:16.499816 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.499826 | orchestrator |
2025-08-29 14:45:16.499837 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-08-29 14:45:16.499847 | orchestrator | Friday 29 August 2025  14:45:15 +0000 (0:00:00.112)       0:00:40.045 *********
2025-08-29 14:45:16.499858 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.499868 | orchestrator |
2025-08-29 14:45:16.499879 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-08-29 14:45:16.499890 | orchestrator | Friday 29 August 2025  14:45:15 +0000 (0:00:00.112)       0:00:40.157 *********
2025-08-29 14:45:16.499900 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 14:45:16.499911 | orchestrator |     "vgs_report": {
2025-08-29 14:45:16.499922 | orchestrator |         "vg": []
2025-08-29 14:45:16.499934 | orchestrator |     }
2025-08-29 14:45:16.499944 | orchestrator | }
2025-08-29 14:45:16.499955 | orchestrator |
2025-08-29 14:45:16.499965 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-08-29 14:45:16.499976 | orchestrator | Friday 29 August 2025  14:45:15 +0000 (0:00:00.160)       0:00:40.318 *********
2025-08-29 14:45:16.499986 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.499997 | orchestrator |
2025-08-29 14:45:16.500008 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-08-29 14:45:16.500018 | orchestrator | Friday 29 August 2025  14:45:16 +0000 (0:00:00.144)       0:00:40.462 *********
2025-08-29 14:45:16.500029 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.500039 | orchestrator |
2025-08-29 14:45:16.500059 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-08-29 14:45:16.500070 | orchestrator | Friday 29 August 2025  14:45:16 +0000 (0:00:00.135)       0:00:40.597 *********
2025-08-29 14:45:16.500081 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.500091 | orchestrator |
2025-08-29 14:45:16.500102 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-08-29 14:45:16.500113 | orchestrator | Friday 29 August 2025  14:45:16 +0000 (0:00:00.144)       0:00:40.742 *********
2025-08-29 14:45:16.500123 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:16.500134 | orchestrator |
2025-08-29 14:45:16.500145 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-08-29 14:45:16.500163 | orchestrator | Friday 29 August 2025  14:45:16 +0000 (0:00:00.151)       0:00:40.894 *********
2025-08-29 14:45:21.532800 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.532936 | orchestrator |
2025-08-29 14:45:21.532978 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-08-29 14:45:21.532985 | orchestrator | Friday 29 August 2025  14:45:16 +0000 (0:00:00.143)       0:00:41.037 *********
2025-08-29 14:45:21.532989 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.532993 | orchestrator |
2025-08-29 14:45:21.532997 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-08-29 14:45:21.533001 | orchestrator | Friday 29 August 2025  14:45:17 +0000 (0:00:00.429)       0:00:41.467 *********
2025-08-29 14:45:21.533005 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.533009 | orchestrator |
2025-08-29 14:45:21.533013 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-08-29 14:45:21.533017 | orchestrator | Friday 29 August 2025  14:45:17 +0000 (0:00:00.160)       0:00:41.627 *********
2025-08-29 14:45:21.533021 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.533025 | orchestrator |
2025-08-29 14:45:21.533029 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-08-29 14:45:21.533033 | orchestrator | Friday 29 August 2025  14:45:17 +0000 (0:00:00.154)       0:00:41.781 *********
2025-08-29 14:45:21.533036 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.533040 | orchestrator |
2025-08-29 14:45:21.533044 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-08-29 14:45:21.533048 | orchestrator | Friday 29 August 2025  14:45:17 +0000 (0:00:00.139)       0:00:41.921 *********
2025-08-29 14:45:21.533051 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.533055 | orchestrator |
2025-08-29 14:45:21.533059 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-08-29 14:45:21.533063 | orchestrator | Friday 29 August 2025  14:45:17 +0000 (0:00:00.150)       0:00:42.071 *********
2025-08-29 14:45:21.533067 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.533070 | orchestrator |
2025-08-29 14:45:21.533074 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-08-29 14:45:21.533078 | orchestrator | Friday 29 August 2025  14:45:17 +0000 (0:00:00.133)       0:00:42.205 *********
2025-08-29 14:45:21.533082 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.533086 | orchestrator |
2025-08-29 14:45:21.533089 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-08-29 14:45:21.533093 | orchestrator | Friday 29 August 2025  14:45:17 +0000 (0:00:00.134)       0:00:42.339 *********
2025-08-29 14:45:21.533097 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.533101 | orchestrator |
2025-08-29 14:45:21.533105 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-08-29 14:45:21.533108 | orchestrator | Friday 29 August 2025  14:45:18 +0000 (0:00:00.122)       0:00:42.462 *********
2025-08-29 14:45:21.533112 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.533116 | orchestrator |
2025-08-29 14:45:21.533120 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-08-29 14:45:21.533124 | orchestrator | Friday 29 August 2025  14:45:18 +0000 (0:00:00.148)       0:00:42.610 *********
2025-08-29 14:45:21.533143 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:21.533149 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:21.533153 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.533157 | orchestrator |
2025-08-29 14:45:21.533161 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-08-29 14:45:21.533165 | orchestrator | Friday 29 August 2025  14:45:18 +0000 (0:00:00.160)       0:00:42.771 *********
2025-08-29 14:45:21.533168 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:21.533172 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:21.533180 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.533184 | orchestrator |
2025-08-29 14:45:21.533187 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-08-29 14:45:21.533191 | orchestrator | Friday 29 August 2025  14:45:18 +0000 (0:00:00.166)       0:00:42.938 *********
2025-08-29 14:45:21.533195 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:21.533199 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:21.533227 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.533232 | orchestrator |
2025-08-29 14:45:21.533236 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-08-29 14:45:21.533240 | orchestrator | Friday 29 August 2025  14:45:18 +0000 (0:00:00.157)       0:00:43.095 *********
2025-08-29 14:45:21.533243 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:21.533249 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:21.533255 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.533260 | orchestrator |
2025-08-29 14:45:21.533266 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-08-29 14:45:21.533292 | orchestrator | Friday 29 August 2025  14:45:19 +0000 (0:00:00.397)       0:00:43.493 *********
2025-08-29 14:45:21.533300 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:21.533306 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:21.533312 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.533318 | orchestrator |
2025-08-29 14:45:21.533325 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-08-29 14:45:21.533331 | orchestrator | Friday 29 August 2025  14:45:19 +0000 (0:00:00.186)       0:00:43.680 *********
2025-08-29 14:45:21.533338 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:21.533343 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:21.533348 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.533354 | orchestrator |
2025-08-29 14:45:21.533359 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-08-29 14:45:21.533364 | orchestrator | Friday 29 August 2025  14:45:19 +0000 (0:00:00.169)       0:00:43.849 *********
2025-08-29 14:45:21.533369 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:21.533375 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:21.533381 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.533387 | orchestrator |
2025-08-29 14:45:21.533393 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-08-29 14:45:21.533400 | orchestrator | Friday 29 August 2025  14:45:19 +0000 (0:00:00.202)       0:00:44.052 *********
2025-08-29 14:45:21.533405 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:21.533418 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:21.533424 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.533431 | orchestrator |
2025-08-29 14:45:21.533443 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-08-29 14:45:21.533449 | orchestrator | Friday 29 August 2025  14:45:19 +0000 (0:00:00.164)       0:00:44.217 *********
2025-08-29 14:45:21.533455 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:45:21.533461 | orchestrator |
2025-08-29 14:45:21.533468 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-08-29 14:45:21.533474 | orchestrator | Friday 29 August 2025  14:45:20 +0000 (0:00:00.512)       0:00:44.729 *********
2025-08-29 14:45:21.533480 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:45:21.533486 | orchestrator |
2025-08-29 14:45:21.533492 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-08-29 14:45:21.533499 | orchestrator | Friday 29 August 2025  14:45:20 +0000 (0:00:00.516)       0:00:45.245 *********
2025-08-29 14:45:21.533505 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:45:21.533511 | orchestrator |
2025-08-29 14:45:21.533517 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-08-29 14:45:21.533524 | orchestrator | Friday 29 August 2025  14:45:20 +0000 (0:00:00.154)       0:00:45.400 *********
2025-08-29 14:45:21.533531 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'vg_name': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:21.533539 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'vg_name': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:21.533546 | orchestrator |
2025-08-29 14:45:21.533549 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-08-29 14:45:21.533553 | orchestrator | Friday 29 August 2025  14:45:21 +0000 (0:00:00.192)       0:00:45.593 *********
2025-08-29 14:45:21.533557 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:21.533561 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:21.533567 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:21.533573 | orchestrator |
2025-08-29 14:45:21.533579 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-08-29 14:45:21.533585 | orchestrator | Friday 29 August 2025  14:45:21 +0000 (0:00:00.181)       0:00:45.774 *********
2025-08-29 14:45:21.533591 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:21.533598 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:21.533611 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:27.866430 | orchestrator |
2025-08-29 14:45:27.866545 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-08-29 14:45:27.866562 | orchestrator | Friday 29 August 2025  14:45:21 +0000 (0:00:00.153)       0:00:45.928 *********
2025-08-29 14:45:27.866575 | orchestrator | skipping: [testbed-node-4] =>
(item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'})
2025-08-29 14:45:27.866589 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'})
2025-08-29 14:45:27.866600 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:27.866612 | orchestrator |
2025-08-29 14:45:27.866623 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-08-29 14:45:27.866634 | orchestrator | Friday 29 August 2025 14:45:21 +0000 (0:00:00.159) 0:00:46.087 *********
2025-08-29 14:45:27.866672 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 14:45:27.866684 | orchestrator |     "lvm_report": {
2025-08-29 14:45:27.866698 | orchestrator |         "lv": [
2025-08-29 14:45:27.866709 | orchestrator |             {
2025-08-29 14:45:27.866720 | orchestrator |                 "lv_name": "osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373",
2025-08-29 14:45:27.866732 | orchestrator |                 "vg_name": "ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373"
2025-08-29 14:45:27.866743 | orchestrator |             },
2025-08-29 14:45:27.866754 | orchestrator |             {
2025-08-29 14:45:27.866765 | orchestrator |                 "lv_name": "osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc",
2025-08-29 14:45:27.866775 | orchestrator |                 "vg_name": "ceph-8955e74f-f88a-5c8e-a869-5f490c143acc"
2025-08-29 14:45:27.866786 | orchestrator |             }
2025-08-29 14:45:27.866796 | orchestrator |         ],
2025-08-29 14:45:27.866807 | orchestrator |         "pv": [
2025-08-29 14:45:27.866818 | orchestrator |             {
2025-08-29 14:45:27.866828 | orchestrator |                 "pv_name": "/dev/sdb",
2025-08-29 14:45:27.866839 | orchestrator |                 "vg_name": "ceph-8955e74f-f88a-5c8e-a869-5f490c143acc"
2025-08-29 14:45:27.866850 | orchestrator |             },
2025-08-29 14:45:27.866860 | orchestrator |             {
2025-08-29 14:45:27.866871 | orchestrator |                 "pv_name": "/dev/sdc",
2025-08-29 14:45:27.866882 | orchestrator |                 "vg_name": "ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373"
2025-08-29 14:45:27.866892 | orchestrator |             }
2025-08-29 14:45:27.866903 | orchestrator |         ]
2025-08-29 14:45:27.866914 | orchestrator |     }
2025-08-29 14:45:27.866925 | orchestrator | }
2025-08-29 14:45:27.866936 | orchestrator |
2025-08-29 14:45:27.866949 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-08-29 14:45:27.866961 | orchestrator |
2025-08-29 14:45:27.866973 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 14:45:27.866985 | orchestrator | Friday 29 August 2025 14:45:22 +0000 (0:00:00.563) 0:00:46.651 *********
2025-08-29 14:45:27.866997 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-08-29 14:45:27.867009 | orchestrator |
2025-08-29 14:45:27.867021 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 14:45:27.867033 | orchestrator | Friday 29 August 2025 14:45:22 +0000 (0:00:00.246) 0:00:46.906 *********
2025-08-29 14:45:27.867045 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:45:27.867058 | orchestrator |
2025-08-29 14:45:27.867070 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:45:27.867082 | orchestrator | Friday 29 August 2025 14:45:22 +0000 (0:00:00.406) 0:00:47.152 *********
2025-08-29 14:45:27.867094 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-08-29 14:45:27.867106 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-08-29 14:45:27.867118 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-08-29 14:45:27.867130 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-08-29 14:45:27.867141 | orchestrator | included:
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-08-29 14:45:27.867152 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-08-29 14:45:27.867163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-08-29 14:45:27.867179 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-08-29 14:45:27.867197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-08-29 14:45:27.867245 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-08-29 14:45:27.867265 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-08-29 14:45:27.867296 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-08-29 14:45:27.867315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-08-29 14:45:27.867327 | orchestrator | 2025-08-29 14:45:27.867338 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:27.867348 | orchestrator | Friday 29 August 2025 14:45:23 +0000 (0:00:00.406) 0:00:47.558 ********* 2025-08-29 14:45:27.867359 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:27.867374 | orchestrator | 2025-08-29 14:45:27.867385 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:27.867396 | orchestrator | Friday 29 August 2025 14:45:23 +0000 (0:00:00.189) 0:00:47.748 ********* 2025-08-29 14:45:27.867407 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:27.867418 | orchestrator | 2025-08-29 14:45:27.867429 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:27.867457 | orchestrator | 
Friday 29 August 2025 14:45:23 +0000 (0:00:00.200) 0:00:47.949 ********* 2025-08-29 14:45:27.867468 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:27.867479 | orchestrator | 2025-08-29 14:45:27.867490 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:27.867501 | orchestrator | Friday 29 August 2025 14:45:23 +0000 (0:00:00.184) 0:00:48.133 ********* 2025-08-29 14:45:27.867512 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:27.867523 | orchestrator | 2025-08-29 14:45:27.867533 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:27.867544 | orchestrator | Friday 29 August 2025 14:45:23 +0000 (0:00:00.198) 0:00:48.331 ********* 2025-08-29 14:45:27.867555 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:27.867566 | orchestrator | 2025-08-29 14:45:27.867576 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:27.867587 | orchestrator | Friday 29 August 2025 14:45:24 +0000 (0:00:00.195) 0:00:48.527 ********* 2025-08-29 14:45:27.867598 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:27.867609 | orchestrator | 2025-08-29 14:45:27.867620 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:27.867631 | orchestrator | Friday 29 August 2025 14:45:24 +0000 (0:00:00.726) 0:00:49.253 ********* 2025-08-29 14:45:27.867642 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:27.867652 | orchestrator | 2025-08-29 14:45:27.867663 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:27.867674 | orchestrator | Friday 29 August 2025 14:45:25 +0000 (0:00:00.219) 0:00:49.472 ********* 2025-08-29 14:45:27.867685 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:27.867696 | orchestrator | 2025-08-29 14:45:27.867706 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:27.867717 | orchestrator | Friday 29 August 2025 14:45:25 +0000 (0:00:00.214) 0:00:49.687 ********* 2025-08-29 14:45:27.867728 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d) 2025-08-29 14:45:27.867800 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d) 2025-08-29 14:45:27.867813 | orchestrator | 2025-08-29 14:45:27.867824 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:27.867835 | orchestrator | Friday 29 August 2025 14:45:25 +0000 (0:00:00.404) 0:00:50.091 ********* 2025-08-29 14:45:27.867846 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e2b81981-b087-421f-a1f1-ab20210f7cdd) 2025-08-29 14:45:27.867857 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e2b81981-b087-421f-a1f1-ab20210f7cdd) 2025-08-29 14:45:27.867868 | orchestrator | 2025-08-29 14:45:27.867879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:27.867890 | orchestrator | Friday 29 August 2025 14:45:26 +0000 (0:00:00.435) 0:00:50.526 ********* 2025-08-29 14:45:27.867912 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4ecbb96d-8085-4234-aac0-aef459b35ca9) 2025-08-29 14:45:27.867923 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4ecbb96d-8085-4234-aac0-aef459b35ca9) 2025-08-29 14:45:27.867934 | orchestrator | 2025-08-29 14:45:27.867945 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:27.867956 | orchestrator | Friday 29 August 2025 14:45:26 +0000 (0:00:00.438) 0:00:50.965 ********* 2025-08-29 14:45:27.867967 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_d88be871-1de9-4c4e-96cc-2f99c6f9bcd6) 2025-08-29 14:45:27.867978 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d88be871-1de9-4c4e-96cc-2f99c6f9bcd6) 2025-08-29 14:45:27.867988 | orchestrator | 2025-08-29 14:45:27.867999 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:27.868010 | orchestrator | Friday 29 August 2025 14:45:27 +0000 (0:00:00.475) 0:00:51.441 ********* 2025-08-29 14:45:27.868021 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 14:45:27.868032 | orchestrator | 2025-08-29 14:45:27.868043 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:27.868053 | orchestrator | Friday 29 August 2025 14:45:27 +0000 (0:00:00.408) 0:00:51.850 ********* 2025-08-29 14:45:27.868064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-08-29 14:45:27.868075 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-08-29 14:45:27.868086 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-08-29 14:45:27.868096 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-08-29 14:45:27.868107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-08-29 14:45:27.868118 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-08-29 14:45:27.868129 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-08-29 14:45:27.868139 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-08-29 14:45:27.868150 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-08-29 14:45:27.868161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-08-29 14:45:27.868172 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-08-29 14:45:27.868190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-08-29 14:45:37.514787 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-08-29 14:45:37.514924 | orchestrator | 2025-08-29 14:45:37.514951 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:37.514971 | orchestrator | Friday 29 August 2025 14:45:27 +0000 (0:00:00.405) 0:00:52.255 ********* 2025-08-29 14:45:37.514990 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.515009 | orchestrator | 2025-08-29 14:45:37.515030 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:37.515050 | orchestrator | Friday 29 August 2025 14:45:28 +0000 (0:00:00.215) 0:00:52.470 ********* 2025-08-29 14:45:37.515068 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.515084 | orchestrator | 2025-08-29 14:45:37.515095 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:37.515107 | orchestrator | Friday 29 August 2025 14:45:28 +0000 (0:00:00.226) 0:00:52.697 ********* 2025-08-29 14:45:37.515117 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.515128 | orchestrator | 2025-08-29 14:45:37.515139 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:37.515176 | orchestrator | Friday 29 August 2025 14:45:29 +0000 (0:00:00.767) 0:00:53.465 ********* 2025-08-29 14:45:37.515187 | orchestrator | 
skipping: [testbed-node-5] 2025-08-29 14:45:37.515198 | orchestrator | 2025-08-29 14:45:37.515208 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:37.515254 | orchestrator | Friday 29 August 2025 14:45:29 +0000 (0:00:00.213) 0:00:53.679 ********* 2025-08-29 14:45:37.515265 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.515276 | orchestrator | 2025-08-29 14:45:37.515288 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:37.515300 | orchestrator | Friday 29 August 2025 14:45:29 +0000 (0:00:00.267) 0:00:53.946 ********* 2025-08-29 14:45:37.515312 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.515324 | orchestrator | 2025-08-29 14:45:37.515336 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:37.515349 | orchestrator | Friday 29 August 2025 14:45:29 +0000 (0:00:00.222) 0:00:54.169 ********* 2025-08-29 14:45:37.515361 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.515373 | orchestrator | 2025-08-29 14:45:37.515385 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:37.515397 | orchestrator | Friday 29 August 2025 14:45:30 +0000 (0:00:00.255) 0:00:54.424 ********* 2025-08-29 14:45:37.515409 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.515420 | orchestrator | 2025-08-29 14:45:37.515433 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:37.515445 | orchestrator | Friday 29 August 2025 14:45:30 +0000 (0:00:00.206) 0:00:54.630 ********* 2025-08-29 14:45:37.515457 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-08-29 14:45:37.515470 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-08-29 14:45:37.515499 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-08-29 
14:45:37.515511 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-08-29 14:45:37.515523 | orchestrator | 2025-08-29 14:45:37.515535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:37.515552 | orchestrator | Friday 29 August 2025 14:45:31 +0000 (0:00:00.827) 0:00:55.458 ********* 2025-08-29 14:45:37.515570 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.515590 | orchestrator | 2025-08-29 14:45:37.515607 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:37.515625 | orchestrator | Friday 29 August 2025 14:45:31 +0000 (0:00:00.188) 0:00:55.646 ********* 2025-08-29 14:45:37.515649 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.515675 | orchestrator | 2025-08-29 14:45:37.515693 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:37.515712 | orchestrator | Friday 29 August 2025 14:45:31 +0000 (0:00:00.223) 0:00:55.870 ********* 2025-08-29 14:45:37.515729 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.515740 | orchestrator | 2025-08-29 14:45:37.515751 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:37.515761 | orchestrator | Friday 29 August 2025 14:45:31 +0000 (0:00:00.200) 0:00:56.070 ********* 2025-08-29 14:45:37.515772 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.515783 | orchestrator | 2025-08-29 14:45:37.515794 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 14:45:37.515804 | orchestrator | Friday 29 August 2025 14:45:31 +0000 (0:00:00.208) 0:00:56.278 ********* 2025-08-29 14:45:37.515815 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.515826 | orchestrator | 2025-08-29 14:45:37.515836 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-08-29 14:45:37.515847 | orchestrator | Friday 29 August 2025 14:45:32 +0000 (0:00:00.398) 0:00:56.676 ********* 2025-08-29 14:45:37.515858 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'}}) 2025-08-29 14:45:37.515869 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'}}) 2025-08-29 14:45:37.515892 | orchestrator | 2025-08-29 14:45:37.515903 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 14:45:37.515913 | orchestrator | Friday 29 August 2025 14:45:32 +0000 (0:00:00.197) 0:00:56.874 ********* 2025-08-29 14:45:37.515926 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'}) 2025-08-29 14:45:37.515938 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'}) 2025-08-29 14:45:37.515949 | orchestrator | 2025-08-29 14:45:37.515960 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-08-29 14:45:37.515991 | orchestrator | Friday 29 August 2025 14:45:34 +0000 (0:00:01.877) 0:00:58.751 ********* 2025-08-29 14:45:37.516003 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 14:45:37.516015 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:37.516026 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.516037 | orchestrator | 2025-08-29 14:45:37.516048 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-08-29 14:45:37.516059 | orchestrator | Friday 29 August 2025 14:45:34 +0000 (0:00:00.180) 0:00:58.931 ********* 2025-08-29 14:45:37.516070 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'}) 2025-08-29 14:45:37.516081 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'}) 2025-08-29 14:45:37.516092 | orchestrator | 2025-08-29 14:45:37.516103 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-08-29 14:45:37.516114 | orchestrator | Friday 29 August 2025 14:45:35 +0000 (0:00:01.339) 0:01:00.271 ********* 2025-08-29 14:45:37.516125 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 14:45:37.516136 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:37.516147 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.516158 | orchestrator | 2025-08-29 14:45:37.516168 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-08-29 14:45:37.516179 | orchestrator | Friday 29 August 2025 14:45:36 +0000 (0:00:00.153) 0:01:00.424 ********* 2025-08-29 14:45:37.516190 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.516201 | orchestrator | 2025-08-29 14:45:37.516257 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-08-29 14:45:37.516272 | orchestrator | Friday 29 August 2025 14:45:36 +0000 (0:00:00.159) 0:01:00.584 ********* 2025-08-29 14:45:37.516283 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 14:45:37.516302 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:37.516313 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.516324 | orchestrator | 2025-08-29 14:45:37.516334 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-08-29 14:45:37.516345 | orchestrator | Friday 29 August 2025 14:45:36 +0000 (0:00:00.156) 0:01:00.740 ********* 2025-08-29 14:45:37.516356 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.516375 | orchestrator | 2025-08-29 14:45:37.516386 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-08-29 14:45:37.516405 | orchestrator | Friday 29 August 2025 14:45:36 +0000 (0:00:00.131) 0:01:00.872 ********* 2025-08-29 14:45:37.516423 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 14:45:37.516440 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:37.516462 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.516487 | orchestrator | 2025-08-29 14:45:37.516505 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-08-29 14:45:37.516525 | orchestrator | Friday 29 August 2025 14:45:36 +0000 (0:00:00.157) 0:01:01.030 ********* 2025-08-29 14:45:37.516543 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.516562 | orchestrator | 2025-08-29 14:45:37.516573 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-08-29 14:45:37.516584 | orchestrator | Friday 29 August 2025 14:45:36 +0000 (0:00:00.138) 0:01:01.168 ********* 2025-08-29 14:45:37.516595 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 14:45:37.516606 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:37.516617 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:37.516628 | orchestrator | 2025-08-29 14:45:37.516639 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-08-29 14:45:37.516650 | orchestrator | Friday 29 August 2025 14:45:36 +0000 (0:00:00.161) 0:01:01.329 ********* 2025-08-29 14:45:37.516660 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:45:37.516671 | orchestrator | 2025-08-29 14:45:37.516682 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-08-29 14:45:37.516693 | orchestrator | Friday 29 August 2025 14:45:37 +0000 (0:00:00.420) 0:01:01.750 ********* 2025-08-29 14:45:37.516717 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 14:45:43.725854 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:43.725986 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.726005 | orchestrator | 2025-08-29 14:45:43.726060 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-08-29 14:45:43.726077 | orchestrator | Friday 29 August 2025 
14:45:37 +0000 (0:00:00.161) 0:01:01.912 ********* 2025-08-29 14:45:43.726089 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 14:45:43.726101 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:43.726113 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.726124 | orchestrator | 2025-08-29 14:45:43.726136 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-08-29 14:45:43.726147 | orchestrator | Friday 29 August 2025 14:45:37 +0000 (0:00:00.137) 0:01:02.049 ********* 2025-08-29 14:45:43.726158 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 14:45:43.726169 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:43.726180 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.726284 | orchestrator | 2025-08-29 14:45:43.726298 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-08-29 14:45:43.726309 | orchestrator | Friday 29 August 2025 14:45:37 +0000 (0:00:00.159) 0:01:02.208 ********* 2025-08-29 14:45:43.726320 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.726331 | orchestrator | 2025-08-29 14:45:43.726342 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-08-29 14:45:43.726353 | orchestrator | Friday 29 August 2025 14:45:37 +0000 (0:00:00.167) 0:01:02.375 ********* 2025-08-29 14:45:43.726364 | orchestrator | skipping: [testbed-node-5] 2025-08-29 
14:45:43.726374 | orchestrator |
2025-08-29 14:45:43.726385 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-08-29 14:45:43.726396 | orchestrator | Friday 29 August 2025 14:45:38 +0000 (0:00:00.139) 0:01:02.515 *********
2025-08-29 14:45:43.726407 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:45:43.726417 | orchestrator |
2025-08-29 14:45:43.726428 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-08-29 14:45:43.726439 | orchestrator | Friday 29 August 2025 14:45:38 +0000 (0:00:00.152) 0:01:02.667 *********
2025-08-29 14:45:43.726450 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 14:45:43.726462 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-08-29 14:45:43.726473 | orchestrator | }
2025-08-29 14:45:43.726484 | orchestrator |
2025-08-29 14:45:43.726495 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-08-29 14:45:43.726506 | orchestrator | Friday 29 August 2025 14:45:38 +0000 (0:00:00.156) 0:01:02.823 *********
2025-08-29 14:45:43.726517 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 14:45:43.726527 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-08-29 14:45:43.726538 | orchestrator | }
2025-08-29 14:45:43.726549 | orchestrator |
2025-08-29 14:45:43.726559 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-08-29 14:45:43.726572 | orchestrator | Friday 29 August 2025 14:45:38 +0000 (0:00:00.136) 0:01:02.960 *********
2025-08-29 14:45:43.726582 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 14:45:43.726593 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-08-29 14:45:43.726604 | orchestrator | }
2025-08-29 14:45:43.726615 | orchestrator |
2025-08-29 14:45:43.726626 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-08-29 14:45:43.726636 |
orchestrator | Friday 29 August 2025 14:45:38 +0000 (0:00:00.137) 0:01:03.098 ********* 2025-08-29 14:45:43.726647 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:45:43.726658 | orchestrator | 2025-08-29 14:45:43.726669 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-08-29 14:45:43.726680 | orchestrator | Friday 29 August 2025 14:45:39 +0000 (0:00:00.527) 0:01:03.625 ********* 2025-08-29 14:45:43.726690 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:45:43.726701 | orchestrator | 2025-08-29 14:45:43.726712 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-08-29 14:45:43.726723 | orchestrator | Friday 29 August 2025 14:45:39 +0000 (0:00:00.503) 0:01:04.129 ********* 2025-08-29 14:45:43.726733 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:45:43.726744 | orchestrator | 2025-08-29 14:45:43.726755 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-08-29 14:45:43.726766 | orchestrator | Friday 29 August 2025 14:45:40 +0000 (0:00:00.796) 0:01:04.925 ********* 2025-08-29 14:45:43.726777 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:45:43.726787 | orchestrator | 2025-08-29 14:45:43.726798 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-08-29 14:45:43.726809 | orchestrator | Friday 29 August 2025 14:45:40 +0000 (0:00:00.156) 0:01:05.082 ********* 2025-08-29 14:45:43.726819 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.726830 | orchestrator | 2025-08-29 14:45:43.726841 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-08-29 14:45:43.726852 | orchestrator | Friday 29 August 2025 14:45:40 +0000 (0:00:00.102) 0:01:05.184 ********* 2025-08-29 14:45:43.726873 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.726884 | orchestrator | 2025-08-29 14:45:43.726894 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-08-29 14:45:43.726905 | orchestrator | Friday 29 August 2025 14:45:40 +0000 (0:00:00.112) 0:01:05.297 ********* 2025-08-29 14:45:43.726916 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 14:45:43.726927 | orchestrator |  "vgs_report": { 2025-08-29 14:45:43.726939 | orchestrator |  "vg": [] 2025-08-29 14:45:43.726969 | orchestrator |  } 2025-08-29 14:45:43.726981 | orchestrator | } 2025-08-29 14:45:43.726991 | orchestrator | 2025-08-29 14:45:43.727002 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-08-29 14:45:43.727013 | orchestrator | Friday 29 August 2025 14:45:41 +0000 (0:00:00.159) 0:01:05.457 ********* 2025-08-29 14:45:43.727024 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.727034 | orchestrator | 2025-08-29 14:45:43.727045 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-08-29 14:45:43.727056 | orchestrator | Friday 29 August 2025 14:45:41 +0000 (0:00:00.136) 0:01:05.594 ********* 2025-08-29 14:45:43.727067 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.727077 | orchestrator | 2025-08-29 14:45:43.727088 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-08-29 14:45:43.727099 | orchestrator | Friday 29 August 2025 14:45:41 +0000 (0:00:00.141) 0:01:05.736 ********* 2025-08-29 14:45:43.727110 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.727120 | orchestrator | 2025-08-29 14:45:43.727131 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-08-29 14:45:43.727142 | orchestrator | Friday 29 August 2025 14:45:41 +0000 (0:00:00.168) 0:01:05.904 ********* 2025-08-29 14:45:43.727152 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.727163 | orchestrator | 2025-08-29 14:45:43.727173 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-08-29 14:45:43.727200 | orchestrator | Friday 29 August 2025 14:45:41 +0000 (0:00:00.138) 0:01:06.042 ********* 2025-08-29 14:45:43.727211 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.727244 | orchestrator | 2025-08-29 14:45:43.727255 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-08-29 14:45:43.727266 | orchestrator | Friday 29 August 2025 14:45:41 +0000 (0:00:00.147) 0:01:06.190 ********* 2025-08-29 14:45:43.727277 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.727288 | orchestrator | 2025-08-29 14:45:43.727299 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-08-29 14:45:43.727309 | orchestrator | Friday 29 August 2025 14:45:41 +0000 (0:00:00.133) 0:01:06.323 ********* 2025-08-29 14:45:43.727320 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.727331 | orchestrator | 2025-08-29 14:45:43.727341 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-08-29 14:45:43.727352 | orchestrator | Friday 29 August 2025 14:45:42 +0000 (0:00:00.136) 0:01:06.460 ********* 2025-08-29 14:45:43.727363 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.727373 | orchestrator | 2025-08-29 14:45:43.727384 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-08-29 14:45:43.727395 | orchestrator | Friday 29 August 2025 14:45:42 +0000 (0:00:00.124) 0:01:06.584 ********* 2025-08-29 14:45:43.727405 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.727416 | orchestrator | 2025-08-29 14:45:43.727427 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-08-29 14:45:43.727443 | orchestrator | Friday 29 August 2025 14:45:42 +0000 (0:00:00.388) 0:01:06.972 ********* 
2025-08-29 14:45:43.727454 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.727465 | orchestrator | 2025-08-29 14:45:43.727476 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-08-29 14:45:43.727486 | orchestrator | Friday 29 August 2025 14:45:42 +0000 (0:00:00.142) 0:01:07.115 ********* 2025-08-29 14:45:43.727497 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.727516 | orchestrator | 2025-08-29 14:45:43.727527 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-08-29 14:45:43.727537 | orchestrator | Friday 29 August 2025 14:45:42 +0000 (0:00:00.138) 0:01:07.253 ********* 2025-08-29 14:45:43.727548 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.727559 | orchestrator | 2025-08-29 14:45:43.727570 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-08-29 14:45:43.727580 | orchestrator | Friday 29 August 2025 14:45:42 +0000 (0:00:00.138) 0:01:07.391 ********* 2025-08-29 14:45:43.727591 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.727602 | orchestrator | 2025-08-29 14:45:43.727613 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-08-29 14:45:43.727623 | orchestrator | Friday 29 August 2025 14:45:43 +0000 (0:00:00.129) 0:01:07.521 ********* 2025-08-29 14:45:43.727634 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.727644 | orchestrator | 2025-08-29 14:45:43.727655 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-08-29 14:45:43.727666 | orchestrator | Friday 29 August 2025 14:45:43 +0000 (0:00:00.137) 0:01:07.658 ********* 2025-08-29 14:45:43.727676 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 
14:45:43.727687 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:43.727698 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.727709 | orchestrator | 2025-08-29 14:45:43.727720 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-08-29 14:45:43.727731 | orchestrator | Friday 29 August 2025 14:45:43 +0000 (0:00:00.152) 0:01:07.811 ********* 2025-08-29 14:45:43.727742 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 14:45:43.727753 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:43.727764 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:43.727774 | orchestrator | 2025-08-29 14:45:43.727785 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-08-29 14:45:43.727796 | orchestrator | Friday 29 August 2025 14:45:43 +0000 (0:00:00.157) 0:01:07.968 ********* 2025-08-29 14:45:43.727814 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 14:45:46.731119 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:46.731261 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:46.731275 | orchestrator | 2025-08-29 14:45:46.731286 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-08-29 14:45:46.731296 | orchestrator | Friday 29 August 2025 
14:45:43 +0000 (0:00:00.154) 0:01:08.123 ********* 2025-08-29 14:45:46.731303 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 14:45:46.731310 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:46.731317 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:46.731323 | orchestrator | 2025-08-29 14:45:46.731330 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-08-29 14:45:46.731336 | orchestrator | Friday 29 August 2025 14:45:43 +0000 (0:00:00.160) 0:01:08.284 ********* 2025-08-29 14:45:46.731342 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 14:45:46.731381 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:46.731388 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:46.731395 | orchestrator | 2025-08-29 14:45:46.731401 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-08-29 14:45:46.731408 | orchestrator | Friday 29 August 2025 14:45:44 +0000 (0:00:00.148) 0:01:08.432 ********* 2025-08-29 14:45:46.731414 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 14:45:46.731420 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:46.731426 | orchestrator | 
skipping: [testbed-node-5] 2025-08-29 14:45:46.731432 | orchestrator | 2025-08-29 14:45:46.731457 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-08-29 14:45:46.731463 | orchestrator | Friday 29 August 2025 14:45:44 +0000 (0:00:00.156) 0:01:08.589 ********* 2025-08-29 14:45:46.731469 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 14:45:46.731475 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:46.731482 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:46.731488 | orchestrator | 2025-08-29 14:45:46.731494 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-08-29 14:45:46.731501 | orchestrator | Friday 29 August 2025 14:45:44 +0000 (0:00:00.392) 0:01:08.982 ********* 2025-08-29 14:45:46.731508 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 14:45:46.731514 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:46.731520 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:46.731526 | orchestrator | 2025-08-29 14:45:46.731532 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-08-29 14:45:46.731538 | orchestrator | Friday 29 August 2025 14:45:44 +0000 (0:00:00.184) 0:01:09.166 ********* 2025-08-29 14:45:46.731544 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:45:46.731552 | orchestrator | 2025-08-29 14:45:46.731559 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-08-29 14:45:46.731565 | orchestrator | Friday 29 August 2025 14:45:45 +0000 (0:00:00.531) 0:01:09.698 ********* 2025-08-29 14:45:46.731572 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:45:46.731578 | orchestrator | 2025-08-29 14:45:46.731584 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-08-29 14:45:46.731590 | orchestrator | Friday 29 August 2025 14:45:45 +0000 (0:00:00.495) 0:01:10.193 ********* 2025-08-29 14:45:46.731596 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:45:46.731603 | orchestrator | 2025-08-29 14:45:46.731609 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-08-29 14:45:46.731615 | orchestrator | Friday 29 August 2025 14:45:45 +0000 (0:00:00.146) 0:01:10.340 ********* 2025-08-29 14:45:46.731621 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'vg_name': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'}) 2025-08-29 14:45:46.731629 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'vg_name': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'}) 2025-08-29 14:45:46.731635 | orchestrator | 2025-08-29 14:45:46.731642 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-08-29 14:45:46.731656 | orchestrator | Friday 29 August 2025 14:45:46 +0000 (0:00:00.158) 0:01:10.498 ********* 2025-08-29 14:45:46.731680 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 14:45:46.731686 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:46.731693 | orchestrator | skipping: 
[testbed-node-5] 2025-08-29 14:45:46.731699 | orchestrator | 2025-08-29 14:45:46.731705 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-08-29 14:45:46.731712 | orchestrator | Friday 29 August 2025 14:45:46 +0000 (0:00:00.146) 0:01:10.644 ********* 2025-08-29 14:45:46.731718 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 14:45:46.731724 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:46.731732 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:46.731738 | orchestrator | 2025-08-29 14:45:46.731745 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-08-29 14:45:46.731751 | orchestrator | Friday 29 August 2025 14:45:46 +0000 (0:00:00.158) 0:01:10.803 ********* 2025-08-29 14:45:46.731758 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'})  2025-08-29 14:45:46.731764 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'})  2025-08-29 14:45:46.731771 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:46.731777 | orchestrator | 2025-08-29 14:45:46.731783 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-08-29 14:45:46.731790 | orchestrator | Friday 29 August 2025 14:45:46 +0000 (0:00:00.152) 0:01:10.956 ********* 2025-08-29 14:45:46.731796 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 14:45:46.731802 | orchestrator |  "lvm_report": { 2025-08-29 14:45:46.731809 | orchestrator |  "lv": [ 2025-08-29 
14:45:46.731816 | orchestrator |  { 2025-08-29 14:45:46.731822 | orchestrator |  "lv_name": "osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde", 2025-08-29 14:45:46.731834 | orchestrator |  "vg_name": "ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde" 2025-08-29 14:45:46.731841 | orchestrator |  }, 2025-08-29 14:45:46.731847 | orchestrator |  { 2025-08-29 14:45:46.731853 | orchestrator |  "lv_name": "osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281", 2025-08-29 14:45:46.731860 | orchestrator |  "vg_name": "ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281" 2025-08-29 14:45:46.731866 | orchestrator |  } 2025-08-29 14:45:46.731872 | orchestrator |  ], 2025-08-29 14:45:46.731879 | orchestrator |  "pv": [ 2025-08-29 14:45:46.731885 | orchestrator |  { 2025-08-29 14:45:46.731892 | orchestrator |  "pv_name": "/dev/sdb", 2025-08-29 14:45:46.731899 | orchestrator |  "vg_name": "ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281" 2025-08-29 14:45:46.731905 | orchestrator |  }, 2025-08-29 14:45:46.731911 | orchestrator |  { 2025-08-29 14:45:46.731918 | orchestrator |  "pv_name": "/dev/sdc", 2025-08-29 14:45:46.731924 | orchestrator |  "vg_name": "ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde" 2025-08-29 14:45:46.731930 | orchestrator |  } 2025-08-29 14:45:46.731937 | orchestrator |  ] 2025-08-29 14:45:46.731943 | orchestrator |  } 2025-08-29 14:45:46.731949 | orchestrator | } 2025-08-29 14:45:46.731956 | orchestrator | 2025-08-29 14:45:46.731962 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:45:46.731974 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-08-29 14:45:46.731981 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-08-29 14:45:46.731987 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-08-29 14:45:46.731994 | orchestrator | 2025-08-29 14:45:46.732000 | 
orchestrator | 2025-08-29 14:45:46.732006 | orchestrator | 2025-08-29 14:45:46.732012 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:45:46.732019 | orchestrator | Friday 29 August 2025 14:45:46 +0000 (0:00:00.150) 0:01:11.106 ********* 2025-08-29 14:45:46.732025 | orchestrator | =============================================================================== 2025-08-29 14:45:46.732031 | orchestrator | Create block VGs -------------------------------------------------------- 5.67s 2025-08-29 14:45:46.732037 | orchestrator | Create block LVs -------------------------------------------------------- 4.09s 2025-08-29 14:45:46.732043 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.97s 2025-08-29 14:45:46.732049 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.84s 2025-08-29 14:45:46.732056 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.54s 2025-08-29 14:45:46.732062 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.53s 2025-08-29 14:45:46.732068 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.49s 2025-08-29 14:45:46.732074 | orchestrator | Add known partitions to the list of available block devices ------------- 1.37s 2025-08-29 14:45:46.732084 | orchestrator | Add known partitions to the list of available block devices ------------- 1.19s 2025-08-29 14:45:47.224432 | orchestrator | Add known links to the list of available block devices ------------------ 1.09s 2025-08-29 14:45:47.224532 | orchestrator | Print LVM report data --------------------------------------------------- 0.97s 2025-08-29 14:45:47.224538 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s 2025-08-29 14:45:47.224542 | orchestrator | Add known partitions to the list of 
available block devices ------------- 0.77s 2025-08-29 14:45:47.224546 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.75s 2025-08-29 14:45:47.224550 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2025-08-29 14:45:47.224554 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2025-08-29 14:45:47.224558 | orchestrator | Get initial list of available block devices ----------------------------- 0.71s 2025-08-29 14:45:47.224562 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.71s 2025-08-29 14:45:47.224566 | orchestrator | Print size needed for LVs on ceph_db_devices ---------------------------- 0.70s 2025-08-29 14:45:47.224570 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2025-08-29 14:45:59.647586 | orchestrator | 2025-08-29 14:45:59 | INFO  | Task ae653b53-0ec8-4d1b-b381-946242ce91cf (facts) was prepared for execution. 2025-08-29 14:45:59.647714 | orchestrator | 2025-08-29 14:45:59 | INFO  | It takes a moment until task ae653b53-0ec8-4d1b-b381-946242ce91cf (facts) has been started and output is visible here. 
2025-08-29 14:46:11.964974 | orchestrator | 2025-08-29 14:46:11.965085 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-08-29 14:46:11.965100 | orchestrator | 2025-08-29 14:46:11.965110 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-08-29 14:46:11.965121 | orchestrator | Friday 29 August 2025 14:46:03 +0000 (0:00:00.264) 0:00:00.264 ********* 2025-08-29 14:46:11.965131 | orchestrator | ok: [testbed-manager] 2025-08-29 14:46:11.965143 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:46:11.965176 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:46:11.965187 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:46:11.965197 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:46:11.965206 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:46:11.965216 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:46:11.965225 | orchestrator | 2025-08-29 14:46:11.965295 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-08-29 14:46:11.965306 | orchestrator | Friday 29 August 2025 14:46:04 +0000 (0:00:01.079) 0:00:01.344 ********* 2025-08-29 14:46:11.965331 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:46:11.965342 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:46:11.965353 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:46:11.965363 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:46:11.965372 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:46:11.965382 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:46:11.965391 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:11.965401 | orchestrator | 2025-08-29 14:46:11.965411 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 14:46:11.965420 | orchestrator | 2025-08-29 14:46:11.965430 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-08-29 14:46:11.965440 | orchestrator | Friday 29 August 2025 14:46:06 +0000 (0:00:01.221) 0:00:02.566 ********* 2025-08-29 14:46:11.965450 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:46:11.965460 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:46:11.965469 | orchestrator | ok: [testbed-manager] 2025-08-29 14:46:11.965479 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:46:11.965488 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:46:11.965498 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:46:11.965508 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:46:11.965519 | orchestrator | 2025-08-29 14:46:11.965530 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-08-29 14:46:11.965541 | orchestrator | 2025-08-29 14:46:11.965551 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-08-29 14:46:11.965562 | orchestrator | Friday 29 August 2025 14:46:10 +0000 (0:00:04.889) 0:00:07.455 ********* 2025-08-29 14:46:11.965573 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:46:11.965584 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:46:11.965594 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:46:11.965605 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:46:11.965615 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:46:11.965626 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:46:11.965637 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:11.965647 | orchestrator | 2025-08-29 14:46:11.965657 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:46:11.965667 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:11.965678 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2025-08-29 14:46:11.965688 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:11.965698 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:11.965708 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:11.965717 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:11.965727 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:11.965744 | orchestrator | 2025-08-29 14:46:11.965754 | orchestrator | 2025-08-29 14:46:11.965764 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:46:11.965774 | orchestrator | Friday 29 August 2025 14:46:11 +0000 (0:00:00.533) 0:00:07.989 ********* 2025-08-29 14:46:11.965784 | orchestrator | =============================================================================== 2025-08-29 14:46:11.965793 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.89s 2025-08-29 14:46:11.965803 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.22s 2025-08-29 14:46:11.965813 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.08s 2025-08-29 14:46:11.965823 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2025-08-29 14:46:24.312375 | orchestrator | 2025-08-29 14:46:24 | INFO  | Task 6497e683-69e0-43e1-9d60-f049e5201ea1 (frr) was prepared for execution. 2025-08-29 14:46:24.313230 | orchestrator | 2025-08-29 14:46:24 | INFO  | It takes a moment until task 6497e683-69e0-43e1-9d60-f049e5201ea1 (frr) has been started and output is visible here. 
2025-08-29 14:46:50.679761 | orchestrator |
2025-08-29 14:46:50.679864 | orchestrator | PLAY [Apply role frr] **********************************************************
2025-08-29 14:46:50.679878 | orchestrator |
2025-08-29 14:46:50.679889 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2025-08-29 14:46:50.679898 | orchestrator | Friday 29 August 2025 14:46:28 +0000 (0:00:00.235) 0:00:00.235 *********
2025-08-29 14:46:50.679908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2025-08-29 14:46:50.679919 | orchestrator |
2025-08-29 14:46:50.679928 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2025-08-29 14:46:50.679936 | orchestrator | Friday 29 August 2025 14:46:28 +0000 (0:00:00.218) 0:00:00.454 *********
2025-08-29 14:46:50.679945 | orchestrator | changed: [testbed-manager]
2025-08-29 14:46:50.679955 | orchestrator |
2025-08-29 14:46:50.679964 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2025-08-29 14:46:50.679972 | orchestrator | Friday 29 August 2025 14:46:29 +0000 (0:00:01.137) 0:00:01.591 *********
2025-08-29 14:46:50.679981 | orchestrator | changed: [testbed-manager]
2025-08-29 14:46:50.679990 | orchestrator |
2025-08-29 14:46:50.679999 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2025-08-29 14:46:50.680007 | orchestrator | Friday 29 August 2025 14:46:39 +0000 (0:00:10.146) 0:00:11.738 *********
2025-08-29 14:46:50.680016 | orchestrator | ok: [testbed-manager]
2025-08-29 14:46:50.680026 | orchestrator |
2025-08-29 14:46:50.680035 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2025-08-29 14:46:50.680044 | orchestrator | Friday 29 August 2025 14:46:41 +0000 (0:00:01.295) 0:00:13.033 *********
2025-08-29 14:46:50.680052 | orchestrator | changed: [testbed-manager]
2025-08-29 14:46:50.680061 | orchestrator |
2025-08-29 14:46:50.680070 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-08-29 14:46:50.680079 | orchestrator | Friday 29 August 2025 14:46:42 +0000 (0:00:00.952) 0:00:13.986 *********
2025-08-29 14:46:50.680087 | orchestrator | ok: [testbed-manager]
2025-08-29 14:46:50.680096 | orchestrator |
2025-08-29 14:46:50.680134 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-08-29 14:46:50.680144 | orchestrator | Friday 29 August 2025 14:46:43 +0000 (0:00:01.289) 0:00:15.276 *********
2025-08-29 14:46:50.680153 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 14:46:50.680162 | orchestrator |
2025-08-29 14:46:50.680171 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-08-29 14:46:50.680180 | orchestrator | Friday 29 August 2025 14:46:44 +0000 (0:00:00.844) 0:00:16.120 *********
2025-08-29 14:46:50.680189 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:46:50.680197 | orchestrator |
2025-08-29 14:46:50.680206 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-08-29 14:46:50.680237 | orchestrator | Friday 29 August 2025 14:46:44 +0000 (0:00:00.176) 0:00:16.296 *********
2025-08-29 14:46:50.680246 | orchestrator | changed: [testbed-manager]
2025-08-29 14:46:50.680255 | orchestrator |
2025-08-29 14:46:50.680306 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-08-29 14:46:50.680316 | orchestrator | Friday 29 August 2025 14:46:45 +0000 (0:00:00.997) 0:00:17.294 *********
2025-08-29 14:46:50.680326 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-08-29 14:46:50.680336 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-08-29 14:46:50.680347 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-08-29 14:46:50.680357 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-08-29 14:46:50.680367 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-08-29 14:46:50.680377 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-08-29 14:46:50.680386 | orchestrator |
2025-08-29 14:46:50.680396 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-08-29 14:46:50.680406 | orchestrator | Friday 29 August 2025 14:46:47 +0000 (0:00:02.206) 0:00:19.500 *********
2025-08-29 14:46:50.680416 | orchestrator | ok: [testbed-manager]
2025-08-29 14:46:50.680426 | orchestrator |
2025-08-29 14:46:50.680435 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-08-29 14:46:50.680445 | orchestrator | Friday 29 August 2025 14:46:48 +0000 (0:00:01.363) 0:00:20.863 *********
2025-08-29 14:46:50.680455 | orchestrator | changed: [testbed-manager]
2025-08-29 14:46:50.680463 | orchestrator |
2025-08-29 14:46:50.680472 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:46:50.680481 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:46:50.680490 | orchestrator |
2025-08-29 14:46:50.680498 | orchestrator |
2025-08-29 14:46:50.680507 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:46:50.680515 | orchestrator | Friday 29 August 2025 14:46:50 +0000 (0:00:01.423) 0:00:22.287 *********
2025-08-29 14:46:50.680524 | orchestrator | ===============================================================================
2025-08-29 14:46:50.680533 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.15s
2025-08-29 14:46:50.680541 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.21s
2025-08-29 14:46:50.680550 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.42s
2025-08-29 14:46:50.680559 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.36s
2025-08-29 14:46:50.680582 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.30s
2025-08-29 14:46:50.680591 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.29s
2025-08-29 14:46:50.680600 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.14s
2025-08-29 14:46:50.680608 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 1.00s
2025-08-29 14:46:50.680617 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.95s
2025-08-29 14:46:50.680625 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.84s
2025-08-29 14:46:50.680634 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s
2025-08-29 14:46:50.680643 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.18s
2025-08-29 14:46:51.057720 | orchestrator |
2025-08-29 14:46:51.060321 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Aug 29 14:46:51 UTC 2025
2025-08-29 14:46:51.060384 | orchestrator |
2025-08-29 14:46:52.949368 | orchestrator | 2025-08-29 14:46:52 | INFO  | Collection nutshell is prepared for execution
2025-08-29 14:46:52.949491 | orchestrator | 2025-08-29 14:46:52 | INFO  | D [0] - dotfiles
2025-08-29 14:47:03.080672 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [0] - homer
2025-08-29 14:47:03.080776 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [0] - netdata
2025-08-29 14:47:03.080790 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [0] - openstackclient
2025-08-29 14:47:03.080801 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [0] - phpmyadmin
2025-08-29 14:47:03.081005 | orchestrator | 2025-08-29 14:47:03 | INFO  | A [0] - common
2025-08-29 14:47:03.085412 | orchestrator | 2025-08-29 14:47:03 | INFO  | A [1] -- loadbalancer
2025-08-29 14:47:03.085598 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [2] --- opensearch
2025-08-29 14:47:03.085831 | orchestrator | 2025-08-29 14:47:03 | INFO  | A [2] --- mariadb-ng
2025-08-29 14:47:03.086092 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [3] ---- horizon
2025-08-29 14:47:03.086214 | orchestrator | 2025-08-29 14:47:03 | INFO  | A [3] ---- keystone
2025-08-29 14:47:03.086354 | orchestrator | 2025-08-29 14:47:03 | INFO  | A [4] ----- neutron
2025-08-29 14:47:03.086547 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [5] ------ wait-for-nova
2025-08-29 14:47:03.086872 | orchestrator | 2025-08-29 14:47:03 | INFO  | A [5] ------ octavia
2025-08-29 14:47:03.088159 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [4] ----- barbican
2025-08-29 14:47:03.088252 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [4] ----- designate
2025-08-29 14:47:03.088713 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [4] ----- ironic
2025-08-29 14:47:03.088731 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [4] ----- placement
2025-08-29 14:47:03.089058 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [4] ----- magnum
2025-08-29 14:47:03.090002 | orchestrator | 2025-08-29 14:47:03 | INFO  | A [1] -- openvswitch
2025-08-29 14:47:03.090135 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [2] --- ovn
2025-08-29 14:47:03.090580 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [1] -- memcached
2025-08-29 14:47:03.090800 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [1] -- redis
2025-08-29 14:47:03.091076 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [1] -- rabbitmq-ng
2025-08-29 14:47:03.091919 | orchestrator | 2025-08-29 14:47:03 | INFO  | A [0] - kubernetes
2025-08-29 14:47:03.094508 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [1] -- kubeconfig
2025-08-29 14:47:03.094533 | orchestrator | 2025-08-29 14:47:03 | INFO  | A [1] -- copy-kubeconfig
2025-08-29 14:47:03.095176 | orchestrator | 2025-08-29 14:47:03 | INFO  | A [0] - ceph
2025-08-29 14:47:03.097115 | orchestrator | 2025-08-29 14:47:03 | INFO  | A [1] -- ceph-pools
2025-08-29 14:47:03.097449 | orchestrator | 2025-08-29 14:47:03 | INFO  | A [2] --- copy-ceph-keys
2025-08-29 14:47:03.097467 | orchestrator | 2025-08-29 14:47:03 | INFO  | A [3] ---- cephclient
2025-08-29 14:47:03.097756 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-08-29 14:47:03.097773 | orchestrator | 2025-08-29 14:47:03 | INFO  | A [4] ----- wait-for-keystone
2025-08-29 14:47:03.098111 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [5] ------ kolla-ceph-rgw
2025-08-29 14:47:03.098313 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [5] ------ glance
2025-08-29 14:47:03.098531 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [5] ------ cinder
2025-08-29 14:47:03.098736 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [5] ------ nova
2025-08-29 14:47:03.099217 | orchestrator | 2025-08-29 14:47:03 | INFO  | A [4] ----- prometheus
2025-08-29 14:47:03.099403 | orchestrator | 2025-08-29 14:47:03 | INFO  | D [5] ------ grafana
2025-08-29 14:47:03.281777 | orchestrator | 2025-08-29 14:47:03 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-08-29 14:47:03.281894 | orchestrator | 2025-08-29 14:47:03 | INFO  | Tasks are running in the background
2025-08-29 14:47:06.496896 | orchestrator | 2025-08-29 14:47:06 | INFO  | No task IDs specified, wait for all currently running tasks
2025-08-29 14:47:08.638666 | orchestrator | 2025-08-29 14:47:08 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED
2025-08-29 14:47:08.638860 | orchestrator | 2025-08-29 14:47:08 | INFO  | Task ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state STARTED
2025-08-29 14:47:08.642144 | orchestrator | 2025-08-29 14:47:08 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:47:08.642170 | orchestrator | 2025-08-29 14:47:08 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED
2025-08-29 14:47:08.642182 | orchestrator | 2025-08-29 14:47:08 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:47:08.642193 | orchestrator | 2025-08-29 14:47:08 | INFO  | Task 25de8554-0e6e-4ec7-a581-d61afb721ff4 is in state STARTED
2025-08-29 14:47:08.642223 | orchestrator | 2025-08-29 14:47:08 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED
2025-08-29 14:47:08.642236 | orchestrator | 2025-08-29 14:47:08 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:47:11.684412 | orchestrator | 2025-08-29 14:47:11 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED
2025-08-29 14:47:11.685983 | orchestrator | 2025-08-29 14:47:11 | INFO  | Task ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state STARTED
2025-08-29 14:47:11.714162 | orchestrator | 2025-08-29 14:47:11 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:47:11.714239 | orchestrator | 2025-08-29 14:47:11 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED
2025-08-29 14:47:11.714253 | orchestrator | 2025-08-29 14:47:11 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:47:11.714264 | orchestrator | 2025-08-29 14:47:11 | INFO  | Task 25de8554-0e6e-4ec7-a581-d61afb721ff4 is in state STARTED
2025-08-29 14:47:11.714324 | orchestrator | 2025-08-29 14:47:11 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED
2025-08-29 14:47:11.714336 | orchestrator | 2025-08-29 14:47:11 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:47:14.718984 | orchestrator | 2025-08-29 14:47:14 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED
2025-08-29 14:47:14.719970 | orchestrator | 2025-08-29 14:47:14 | INFO  | Task ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state STARTED
2025-08-29 14:47:14.721219 | orchestrator | 2025-08-29 14:47:14 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:47:14.721907 | orchestrator | 2025-08-29 14:47:14 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED
2025-08-29 14:47:14.724465 | orchestrator | 2025-08-29 14:47:14 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:47:14.724884 | orchestrator | 2025-08-29 14:47:14 | INFO  | Task 25de8554-0e6e-4ec7-a581-d61afb721ff4 is in state STARTED
2025-08-29 14:47:14.725911 | orchestrator | 2025-08-29 14:47:14 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED
2025-08-29 14:47:14.725934 | orchestrator | 2025-08-29 14:47:14 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:47:17.774441 | orchestrator | 2025-08-29 14:47:17 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED
2025-08-29 14:47:17.774492 | orchestrator | 2025-08-29 14:47:17 | INFO  | Task ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state STARTED
2025-08-29 14:47:17.776443 | orchestrator | 2025-08-29 14:47:17 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:47:17.779618 | orchestrator | 2025-08-29 14:47:17 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED
2025-08-29 14:47:17.780422 | orchestrator | 2025-08-29 14:47:17 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:47:17.782367 | orchestrator | 2025-08-29 14:47:17 | INFO  | Task 25de8554-0e6e-4ec7-a581-d61afb721ff4 is in state STARTED
2025-08-29 14:47:17.785711 | orchestrator | 2025-08-29 14:47:17 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED
2025-08-29 14:47:17.786350 | orchestrator | 2025-08-29 14:47:17 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:47:20.819987 | orchestrator | 2025-08-29 14:47:20 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED
2025-08-29 14:47:20.820679 | orchestrator | 2025-08-29 14:47:20 | INFO  | Task ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state STARTED
2025-08-29 14:47:20.820842 | orchestrator | 2025-08-29 14:47:20 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:47:20.821781 | orchestrator | 2025-08-29 14:47:20 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED
2025-08-29 14:47:20.822814 | orchestrator | 2025-08-29 14:47:20 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:47:20.822838 | orchestrator | 2025-08-29 14:47:20 | INFO  | Task 25de8554-0e6e-4ec7-a581-d61afb721ff4 is in state STARTED
2025-08-29 14:47:20.823764 | orchestrator | 2025-08-29 14:47:20 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED
2025-08-29 14:47:20.823785 | orchestrator | 2025-08-29 14:47:20 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:47:23.981152 | orchestrator | 2025-08-29 14:47:23 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED
2025-08-29 14:47:23.983948 | orchestrator | 2025-08-29 14:47:23 | INFO  | Task ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state STARTED
2025-08-29 14:47:23.984628 | orchestrator | 2025-08-29 14:47:23 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:47:23.987004 | orchestrator | 2025-08-29 14:47:23 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED
2025-08-29 14:47:23.990881 | orchestrator | 2025-08-29 14:47:23 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:47:23.991880 | orchestrator | 2025-08-29 14:47:23 | INFO  | Task 25de8554-0e6e-4ec7-a581-d61afb721ff4 is in state STARTED
2025-08-29 14:47:23.994353 | orchestrator | 2025-08-29 14:47:23 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED
2025-08-29 14:47:23.994388 | orchestrator | 2025-08-29 14:47:23 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:47:27.301468 | orchestrator | 2025-08-29 14:47:27 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED
2025-08-29 14:47:27.301540 | orchestrator | 2025-08-29 14:47:27 | INFO  | Task ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state STARTED
2025-08-29 14:47:27.301546 | orchestrator | 2025-08-29 14:47:27 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:47:27.301550 | orchestrator | 2025-08-29 14:47:27 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED
2025-08-29 14:47:27.301585 | orchestrator | 2025-08-29 14:47:27 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:47:27.301591 | orchestrator | 2025-08-29 14:47:27 | INFO  | Task 25de8554-0e6e-4ec7-a581-d61afb721ff4 is in state STARTED
2025-08-29 14:47:27.301597 | orchestrator | 2025-08-29 14:47:27 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED
2025-08-29 14:47:27.301606 | orchestrator | 2025-08-29 14:47:27 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:47:30.555711 | orchestrator | 2025-08-29 14:47:30 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED
2025-08-29 14:47:30.555815 | orchestrator | 2025-08-29 14:47:30 | INFO  | Task ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state STARTED
2025-08-29 14:47:30.555831 | orchestrator | 2025-08-29 14:47:30 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:47:30.555843 | orchestrator | 2025-08-29 14:47:30 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED
2025-08-29 14:47:30.555855 | orchestrator | 2025-08-29 14:47:30 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:47:30.555866 | orchestrator | 2025-08-29 14:47:30 | INFO  | Task 25de8554-0e6e-4ec7-a581-d61afb721ff4 is in state STARTED
2025-08-29 14:47:30.555877 | orchestrator | 2025-08-29 14:47:30 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED
2025-08-29 14:47:30.555889 | orchestrator | 2025-08-29 14:47:30 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:47:33.673187 | orchestrator |
2025-08-29 14:47:33.673390 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-08-29 14:47:33.673422 | orchestrator |
2025-08-29 14:47:33.673435 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-08-29 14:47:33.673447 | orchestrator | Friday 29 August 2025 14:47:16 +0000 (0:00:01.051) 0:00:01.051 *********
2025-08-29 14:47:33.673459 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:47:33.673471 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:47:33.673482 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:47:33.673493 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:47:33.673504 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:47:33.673514 | orchestrator | changed: [testbed-manager]
2025-08-29 14:47:33.673525 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:47:33.673536 | orchestrator |
2025-08-29 14:47:33.673547 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-08-29 14:47:33.673558 | orchestrator | Friday 29 August 2025 14:47:20 +0000 (0:00:04.467) 0:00:05.519 *********
2025-08-29 14:47:33.673570 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-08-29 14:47:33.673582 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-08-29 14:47:33.673592 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-08-29 14:47:33.673604 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-08-29 14:47:33.673615 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-08-29 14:47:33.673626 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-08-29 14:47:33.673637 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-08-29 14:47:33.673648 | orchestrator |
2025-08-29 14:47:33.673658 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-08-29 14:47:33.673670 | orchestrator | Friday 29 August 2025 14:47:22 +0000 (0:00:01.746) 0:00:07.265 *********
2025-08-29 14:47:33.673701 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:47:21.513463', 'end': '2025-08-29 14:47:21.523149', 'delta': '0:00:00.009686', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 14:47:33.673759 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:47:21.502366', 'end': '2025-08-29 14:47:21.514029', 'delta': '0:00:00.011663', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 14:47:33.673774 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:47:21.586315', 'end': '2025-08-29 14:47:21.706111', 'delta': '0:00:00.119796', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 14:47:33.673817 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:47:21.697340', 'end': '2025-08-29 14:47:21.705939', 'delta': '0:00:00.008599', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 14:47:33.673829 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:47:21.917848', 'end': '2025-08-29 14:47:21.926730', 'delta': '0:00:00.008882', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 14:47:33.674242 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:47:22.026629', 'end': '2025-08-29 14:47:22.036437', 'delta': '0:00:00.009808', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 14:47:33.674327 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:47:22.091634', 'end': '2025-08-29 14:47:22.102992', 'delta': '0:00:00.011358', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 14:47:33.674347 | orchestrator |
2025-08-29 14:47:33.674367 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-08-29 14:47:33.674385 | orchestrator | Friday 29 August 2025 14:47:24 +0000 (0:00:02.216) 0:00:09.482 *********
2025-08-29 14:47:33.674406 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-08-29 14:47:33.674426 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-08-29 14:47:33.674445 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-08-29 14:47:33.674463 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-08-29 14:47:33.674483 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-08-29 14:47:33.674502 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-08-29 14:47:33.674521 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-08-29 14:47:33.674533 | orchestrator |
2025-08-29 14:47:33.674551 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-08-29 14:47:33.674570 | orchestrator | Friday 29 August 2025 14:47:26 +0000 (0:00:02.189) 0:00:11.672 *********
2025-08-29 14:47:33.674589 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-08-29 14:47:33.674608 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-08-29 14:47:33.674626 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-08-29 14:47:33.674643 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-08-29 14:47:33.674661 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-08-29 14:47:33.674679 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-08-29 14:47:33.674697 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-08-29 14:47:33.674714 | orchestrator |
2025-08-29 14:47:33.674732 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:47:33.674771 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:47:33.674793 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:47:33.674804 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:47:33.674815 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:47:33.674836 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:47:33.674847 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:47:33.674857 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:47:33.674868 | orchestrator |
2025-08-29 14:47:33.674879 | orchestrator |
2025-08-29 14:47:33.674889 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:47:33.674900 | orchestrator | Friday 29 August 2025 14:47:31 +0000 (0:00:04.777) 0:00:16.450 *********
2025-08-29 14:47:33.674911 | orchestrator | ===============================================================================
2025-08-29 14:47:33.674921 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.78s
2025-08-29 14:47:33.674938 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.47s
2025-08-29 14:47:33.674949 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.22s
2025-08-29 14:47:33.674960 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.19s
2025-08-29 14:47:33.674972 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.75s
2025-08-29 14:47:33.674982 | orchestrator | 2025-08-29 14:47:33 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED
2025-08-29 14:47:33.674993 | orchestrator | 2025-08-29 14:47:33 | INFO  | Task ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state STARTED
2025-08-29 14:47:33.675003 | orchestrator | 2025-08-29 14:47:33 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:47:33.675014 | orchestrator | 2025-08-29 14:47:33 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED
2025-08-29 14:47:33.675025 | orchestrator | 2025-08-29 14:47:33 | INFO  | Task 54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED
2025-08-29 14:47:33.675035 | orchestrator | 2025-08-29 14:47:33 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:47:33.675046 | orchestrator | 2025-08-29 14:47:33 | INFO  | Task 25de8554-0e6e-4ec7-a581-d61afb721ff4 is in state SUCCESS
2025-08-29 14:47:33.675056 | orchestrator | 2025-08-29 14:47:33 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED
2025-08-29 14:47:33.675067 | orchestrator | 2025-08-29 14:47:33 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:47:36.654927 | orchestrator | 2025-08-29 14:47:36 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED
2025-08-29 14:47:36.657006 | orchestrator | 2025-08-29 14:47:36 | INFO  | Task ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state STARTED
2025-08-29 14:47:36.657196 | orchestrator | 2025-08-29 14:47:36 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:47:36.659153 | orchestrator | 2025-08-29 14:47:36 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED
2025-08-29 14:47:36.660484 | orchestrator | 2025-08-29 14:47:36 | INFO  | Task 54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED
2025-08-29 14:47:36.665322 | orchestrator | 2025-08-29 14:47:36 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:47:36.675438 | orchestrator | 2025-08-29 14:47:36 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED
2025-08-29 14:47:36.675497 | orchestrator | 2025-08-29 14:47:36 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:47:39.780078 | orchestrator | 2025-08-29 14:47:39 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED
2025-08-29 14:47:39.780211 | orchestrator | 2025-08-29 14:47:39 | INFO  | Task ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state STARTED
2025-08-29 14:47:39.781506 | orchestrator | 2025-08-29 14:47:39 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:47:39.782107 | orchestrator | 2025-08-29 14:47:39 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED
2025-08-29 14:47:39.784885 | orchestrator | 2025-08-29 14:47:39 | INFO  | Task 54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED
2025-08-29 14:47:39.785260 | orchestrator | 2025-08-29 14:47:39 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:47:39.785914 | orchestrator | 2025-08-29 14:47:39 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED
2025-08-29 14:47:39.785938 | orchestrator | 2025-08-29 14:47:39 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:47:43.087484 | orchestrator | 2025-08-29 14:47:42 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED
2025-08-29 14:47:43.087590 | orchestrator | 2025-08-29 14:47:42 | INFO  | Task ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state STARTED
2025-08-29 14:47:43.087605 | orchestrator | 2025-08-29 14:47:42 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:47:43.087616 | orchestrator | 2025-08-29 14:47:42 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED
2025-08-29 14:47:43.087627 | orchestrator | 2025-08-29 14:47:42 | INFO  | Task 54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED
2025-08-29 14:47:43.087638 | orchestrator | 2025-08-29 14:47:42 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:47:43.087650 | orchestrator | 2025-08-29 14:47:42 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED
2025-08-29 14:47:43.087681 | orchestrator | 2025-08-29 14:47:42 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:47:46.072272 | orchestrator | 2025-08-29 14:47:45 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED
2025-08-29 14:47:46.072437 | orchestrator | 2025-08-29 14:47:45 | INFO  | Task ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state STARTED
2025-08-29 14:47:46.072453 | orchestrator | 2025-08-29 14:47:45 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:47:46.072467 | orchestrator | 2025-08-29 14:47:45 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED
2025-08-29 14:47:46.072479 | orchestrator | 2025-08-29 14:47:45 | INFO  | Task 54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED
2025-08-29 14:47:46.072579 | orchestrator | 2025-08-29 14:47:45 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:47:46.072601 | orchestrator | 2025-08-29 14:47:45 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED
2025-08-29 14:47:46.072615 | orchestrator | 2025-08-29 14:47:45 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:47:49.153469 | orchestrator | 2025-08-29 14:47:49 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED
2025-08-29 14:47:49.154868 | orchestrator | 2025-08-29 14:47:49 | INFO  | Task ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state STARTED
2025-08-29 14:47:49.157452 | orchestrator | 2025-08-29 14:47:49 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:47:49.158370 | orchestrator | 2025-08-29 14:47:49 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED
2025-08-29 14:47:49.159374 | orchestrator | 2025-08-29 14:47:49 | INFO  | Task 54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED
2025-08-29 14:47:49.160446 | orchestrator | 2025-08-29 14:47:49 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:47:49.162174 | orchestrator | 2025-08-29 14:47:49 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED
2025-08-29 14:47:49.162223 | orchestrator | 2025-08-29 14:47:49 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:47:52.199153 | orchestrator | 2025-08-29 14:47:52 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED
2025-08-29 14:47:52.199494 | orchestrator | 2025-08-29 14:47:52 | INFO  | Task ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state STARTED
2025-08-29 14:47:52.200286 | orchestrator | 2025-08-29 14:47:52 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:47:52.201694 | orchestrator | 2025-08-29 14:47:52 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED
2025-08-29 14:47:52.202393 | orchestrator | 2025-08-29 14:47:52 | INFO  | Task 54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED
2025-08-29 14:47:52.203072 | orchestrator | 2025-08-29 14:47:52 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:47:52.203833 | orchestrator | 2025-08-29 14:47:52 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED
2025-08-29 14:47:52.203872 | orchestrator | 2025-08-29 14:47:52 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:47:55.789504 | orchestrator | 2025-08-29 14:47:55 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED
2025-08-29 14:47:55.789586 | orchestrator | 2025-08-29 14:47:55 | INFO  | Task ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state STARTED
2025-08-29 14:47:55.789598 | orchestrator | 2025-08-29 14:47:55 | INFO  | Task
5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:47:55.789610 | orchestrator | 2025-08-29 14:47:55 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED 2025-08-29 14:47:55.789621 | orchestrator | 2025-08-29 14:47:55 | INFO  | Task 54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED 2025-08-29 14:47:55.789631 | orchestrator | 2025-08-29 14:47:55 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED 2025-08-29 14:47:55.789642 | orchestrator | 2025-08-29 14:47:55 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED 2025-08-29 14:47:55.789653 | orchestrator | 2025-08-29 14:47:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:47:58.876014 | orchestrator | 2025-08-29 14:47:58 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED 2025-08-29 14:47:58.876670 | orchestrator | 2025-08-29 14:47:58 | INFO  | Task ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state STARTED 2025-08-29 14:47:58.887371 | orchestrator | 2025-08-29 14:47:58 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:47:58.887447 | orchestrator | 2025-08-29 14:47:58 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED 2025-08-29 14:47:58.887468 | orchestrator | 2025-08-29 14:47:58 | INFO  | Task 54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED 2025-08-29 14:47:58.902755 | orchestrator | 2025-08-29 14:47:58 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED 2025-08-29 14:47:58.908003 | orchestrator | 2025-08-29 14:47:58 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED 2025-08-29 14:47:58.908140 | orchestrator | 2025-08-29 14:47:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:02.010451 | orchestrator | 2025-08-29 14:48:01 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED 2025-08-29 14:48:02.010570 | orchestrator | 2025-08-29 14:48:01 | INFO  | Task 
ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state STARTED 2025-08-29 14:48:02.010586 | orchestrator | 2025-08-29 14:48:02 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:48:02.010597 | orchestrator | 2025-08-29 14:48:02 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED 2025-08-29 14:48:02.010608 | orchestrator | 2025-08-29 14:48:02 | INFO  | Task 54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED 2025-08-29 14:48:02.010619 | orchestrator | 2025-08-29 14:48:02 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED 2025-08-29 14:48:02.010630 | orchestrator | 2025-08-29 14:48:02 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED 2025-08-29 14:48:02.010641 | orchestrator | 2025-08-29 14:48:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:05.090424 | orchestrator | 2025-08-29 14:48:05 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED 2025-08-29 14:48:05.090516 | orchestrator | 2025-08-29 14:48:05 | INFO  | Task ae8a663e-596e-4723-b2e1-88c3955c60e6 is in state SUCCESS 2025-08-29 14:48:05.090529 | orchestrator | 2025-08-29 14:48:05 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:48:05.090541 | orchestrator | 2025-08-29 14:48:05 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state STARTED 2025-08-29 14:48:05.090551 | orchestrator | 2025-08-29 14:48:05 | INFO  | Task 54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED 2025-08-29 14:48:05.090562 | orchestrator | 2025-08-29 14:48:05 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED 2025-08-29 14:48:05.090572 | orchestrator | 2025-08-29 14:48:05 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED 2025-08-29 14:48:05.090583 | orchestrator | 2025-08-29 14:48:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:08.116357 | orchestrator | 2025-08-29 14:48:08 | INFO  | Task 
e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED 2025-08-29 14:48:08.117016 | orchestrator | 2025-08-29 14:48:08 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:48:08.117415 | orchestrator | 2025-08-29 14:48:08 | INFO  | Task 5a3ad7ae-c9f0-4775-8a21-456863cf60c6 is in state SUCCESS 2025-08-29 14:48:08.118149 | orchestrator | 2025-08-29 14:48:08 | INFO  | Task 54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED 2025-08-29 14:48:08.118946 | orchestrator | 2025-08-29 14:48:08 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED 2025-08-29 14:48:08.119672 | orchestrator | 2025-08-29 14:48:08 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED 2025-08-29 14:48:08.119889 | orchestrator | 2025-08-29 14:48:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:11.167443 | orchestrator | 2025-08-29 14:48:11 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED 2025-08-29 14:48:11.167645 | orchestrator | 2025-08-29 14:48:11 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:48:11.168312 | orchestrator | 2025-08-29 14:48:11 | INFO  | Task 54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED 2025-08-29 14:48:11.169157 | orchestrator | 2025-08-29 14:48:11 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED 2025-08-29 14:48:11.170160 | orchestrator | 2025-08-29 14:48:11 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED 2025-08-29 14:48:11.171100 | orchestrator | 2025-08-29 14:48:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:14.248554 | orchestrator | 2025-08-29 14:48:14 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED 2025-08-29 14:48:14.248651 | orchestrator | 2025-08-29 14:48:14 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:48:14.248666 | orchestrator | 2025-08-29 14:48:14 | INFO  | Task 
54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED 2025-08-29 14:48:14.254884 | orchestrator | 2025-08-29 14:48:14 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED 2025-08-29 14:48:14.257520 | orchestrator | 2025-08-29 14:48:14 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED 2025-08-29 14:48:14.258086 | orchestrator | 2025-08-29 14:48:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:17.373074 | orchestrator | 2025-08-29 14:48:17 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED 2025-08-29 14:48:17.376580 | orchestrator | 2025-08-29 14:48:17 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:48:17.383080 | orchestrator | 2025-08-29 14:48:17 | INFO  | Task 54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED 2025-08-29 14:48:17.389006 | orchestrator | 2025-08-29 14:48:17 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED 2025-08-29 14:48:17.394582 | orchestrator | 2025-08-29 14:48:17 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED 2025-08-29 14:48:17.394641 | orchestrator | 2025-08-29 14:48:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:20.452609 | orchestrator | 2025-08-29 14:48:20 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED 2025-08-29 14:48:20.454106 | orchestrator | 2025-08-29 14:48:20 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:48:20.455234 | orchestrator | 2025-08-29 14:48:20 | INFO  | Task 54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED 2025-08-29 14:48:20.457800 | orchestrator | 2025-08-29 14:48:20 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED 2025-08-29 14:48:20.460902 | orchestrator | 2025-08-29 14:48:20 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED 2025-08-29 14:48:20.460998 | orchestrator | 2025-08-29 14:48:20 | INFO  | Wait 1 
second(s) until the next check 2025-08-29 14:48:23.504524 | orchestrator | 2025-08-29 14:48:23 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED 2025-08-29 14:48:23.505809 | orchestrator | 2025-08-29 14:48:23 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:48:23.507206 | orchestrator | 2025-08-29 14:48:23 | INFO  | Task 54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED 2025-08-29 14:48:23.509812 | orchestrator | 2025-08-29 14:48:23 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED 2025-08-29 14:48:23.510807 | orchestrator | 2025-08-29 14:48:23 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED 2025-08-29 14:48:23.510858 | orchestrator | 2025-08-29 14:48:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:26.552005 | orchestrator | 2025-08-29 14:48:26 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED 2025-08-29 14:48:26.553223 | orchestrator | 2025-08-29 14:48:26 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:48:26.554151 | orchestrator | 2025-08-29 14:48:26 | INFO  | Task 54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED 2025-08-29 14:48:26.556360 | orchestrator | 2025-08-29 14:48:26 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED 2025-08-29 14:48:26.557656 | orchestrator | 2025-08-29 14:48:26 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED 2025-08-29 14:48:26.557679 | orchestrator | 2025-08-29 14:48:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:29.654700 | orchestrator | 2025-08-29 14:48:29 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED 2025-08-29 14:48:29.656262 | orchestrator | 2025-08-29 14:48:29 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:48:29.658113 | orchestrator | 2025-08-29 14:48:29 | INFO  | Task 
54759aeb-6126-4552-aa62-3db8af76cca0 is in state STARTED 2025-08-29 14:48:29.659874 | orchestrator | 2025-08-29 14:48:29 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED 2025-08-29 14:48:29.662483 | orchestrator | 2025-08-29 14:48:29 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED 2025-08-29 14:48:29.662512 | orchestrator | 2025-08-29 14:48:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:48:32.794579 | orchestrator | 2025-08-29 14:48:32 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state STARTED 2025-08-29 14:48:32.794699 | orchestrator | 2025-08-29 14:48:32 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:48:32.796194 | orchestrator | 2025-08-29 14:48:32 | INFO  | Task 54759aeb-6126-4552-aa62-3db8af76cca0 is in state SUCCESS 2025-08-29 14:48:32.796598 | orchestrator | 2025-08-29 14:48:32.796631 | orchestrator | 2025-08-29 14:48:32.796650 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-08-29 14:48:32.796668 | orchestrator | 2025-08-29 14:48:32.796686 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-08-29 14:48:32.796705 | orchestrator | Friday 29 August 2025 14:47:17 +0000 (0:00:00.570) 0:00:00.570 ********* 2025-08-29 14:48:32.796722 | orchestrator | ok: [testbed-manager] => { 2025-08-29 14:48:32.796741 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-08-29 14:48:32.796754 | orchestrator | }
2025-08-29 14:48:32.796764 | orchestrator |
2025-08-29 14:48:32.796773 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-08-29 14:48:32.796805 | orchestrator | Friday 29 August 2025 14:47:18 +0000 (0:00:00.374) 0:00:00.944 *********
2025-08-29 14:48:32.796815 | orchestrator | ok: [testbed-manager]
2025-08-29 14:48:32.796826 | orchestrator |
2025-08-29 14:48:32.796836 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-08-29 14:48:32.796846 | orchestrator | Friday 29 August 2025 14:47:19 +0000 (0:00:01.482) 0:00:02.427 *********
2025-08-29 14:48:32.796856 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-08-29 14:48:32.796866 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-08-29 14:48:32.796876 | orchestrator |
2025-08-29 14:48:32.796886 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-08-29 14:48:32.796895 | orchestrator | Friday 29 August 2025 14:47:21 +0000 (0:00:01.398) 0:00:03.825 *********
2025-08-29 14:48:32.796905 | orchestrator | changed: [testbed-manager]
2025-08-29 14:48:32.796915 | orchestrator |
2025-08-29 14:48:32.796924 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-08-29 14:48:32.796934 | orchestrator | Friday 29 August 2025 14:47:24 +0000 (0:00:03.568) 0:00:07.393 *********
2025-08-29 14:48:32.796944 | orchestrator | changed: [testbed-manager]
2025-08-29 14:48:32.796954 | orchestrator |
2025-08-29 14:48:32.796964 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-08-29 14:48:32.796974 | orchestrator | Friday 29 August 2025 14:47:28 +0000 (0:00:03.341) 0:00:10.735 *********
2025-08-29 14:48:32.796984 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-08-29 14:48:32.796994 | orchestrator | ok: [testbed-manager]
2025-08-29 14:48:32.797032 | orchestrator |
2025-08-29 14:48:32.797043 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-08-29 14:48:32.797052 | orchestrator | Friday 29 August 2025 14:47:59 +0000 (0:00:31.139) 0:00:41.874 *********
2025-08-29 14:48:32.797061 | orchestrator | changed: [testbed-manager]
2025-08-29 14:48:32.797071 | orchestrator |
2025-08-29 14:48:32.797080 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:48:32.797090 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:48:32.797101 | orchestrator |
2025-08-29 14:48:32.797110 | orchestrator |
2025-08-29 14:48:32.797120 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:48:32.797129 | orchestrator | Friday 29 August 2025 14:48:01 +0000 (0:00:02.557) 0:00:44.431 *********
2025-08-29 14:48:32.797139 | orchestrator | ===============================================================================
2025-08-29 14:48:32.797148 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 31.14s
2025-08-29 14:48:32.797158 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.57s
2025-08-29 14:48:32.797169 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 3.34s
2025-08-29 14:48:32.797223 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.56s
2025-08-29 14:48:32.797235 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.48s
2025-08-29 14:48:32.797246 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.40s
2025-08-29 14:48:32.797257 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.37s
2025-08-29 14:48:32.797270 | orchestrator |
2025-08-29 14:48:32.797287 | orchestrator |
2025-08-29 14:48:32.797326 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-08-29 14:48:32.797345 | orchestrator |
2025-08-29 14:48:32.797362 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-08-29 14:48:32.797380 | orchestrator | Friday 29 August 2025 14:47:16 +0000 (0:00:01.028) 0:00:01.028 *********
2025-08-29 14:48:32.797398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-08-29 14:48:32.797416 | orchestrator |
2025-08-29 14:48:32.797429 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-08-29 14:48:32.797440 | orchestrator | Friday 29 August 2025 14:47:17 +0000 (0:00:01.025) 0:00:02.053 *********
2025-08-29 14:48:32.797451 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-08-29 14:48:32.797462 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-08-29 14:48:32.797472 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-08-29 14:48:32.797483 | orchestrator |
2025-08-29 14:48:32.797494 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-08-29 14:48:32.797510 | orchestrator | Friday 29 August 2025 14:47:19 +0000 (0:00:02.384) 0:00:04.438 *********
2025-08-29 14:48:32.797521 | orchestrator | changed: [testbed-manager]
2025-08-29 14:48:32.797532 | orchestrator |
2025-08-29 14:48:32.797542 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-08-29 14:48:32.797551 | orchestrator | Friday 29 August 2025 14:47:21 +0000 (0:00:01.748) 0:00:06.186 *********
2025-08-29 14:48:32.797573 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-08-29 14:48:32.797583 | orchestrator | ok: [testbed-manager]
2025-08-29 14:48:32.797593 | orchestrator |
2025-08-29 14:48:32.797602 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-08-29 14:48:32.797612 | orchestrator | Friday 29 August 2025 14:47:56 +0000 (0:00:35.181) 0:00:41.368 *********
2025-08-29 14:48:32.797621 | orchestrator | changed: [testbed-manager]
2025-08-29 14:48:32.797631 | orchestrator |
2025-08-29 14:48:32.797650 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-08-29 14:48:32.797660 | orchestrator | Friday 29 August 2025 14:47:57 +0000 (0:00:01.280) 0:00:42.648 *********
2025-08-29 14:48:32.797670 | orchestrator | ok: [testbed-manager]
2025-08-29 14:48:32.797679 | orchestrator |
2025-08-29 14:48:32.797689 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-08-29 14:48:32.797698 | orchestrator | Friday 29 August 2025 14:47:59 +0000 (0:00:02.074) 0:00:44.723 *********
2025-08-29 14:48:32.797708 | orchestrator | changed: [testbed-manager]
2025-08-29 14:48:32.797717 | orchestrator |
2025-08-29 14:48:32.797727 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-08-29 14:48:32.797736 | orchestrator | Friday 29 August 2025 14:48:02 +0000 (0:00:02.569) 0:00:47.292 *********
2025-08-29 14:48:32.797745 | orchestrator | changed: [testbed-manager]
2025-08-29 14:48:32.797755 | orchestrator |
2025-08-29 14:48:32.797764 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-08-29 14:48:32.797773 | orchestrator | Friday 29 August 2025 14:48:03 +0000 (0:00:00.891) 0:00:48.183 *********
2025-08-29 14:48:32.797783 | orchestrator | changed: [testbed-manager]
2025-08-29 14:48:32.797792 | orchestrator |
2025-08-29 14:48:32.797802 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-08-29 14:48:32.797811 | orchestrator | Friday 29 August 2025 14:48:04 +0000 (0:00:01.072) 0:00:49.256 *********
2025-08-29 14:48:32.797820 | orchestrator | ok: [testbed-manager]
2025-08-29 14:48:32.797830 | orchestrator |
2025-08-29 14:48:32.797839 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:48:32.797849 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:48:32.797858 | orchestrator |
2025-08-29 14:48:32.797868 | orchestrator |
2025-08-29 14:48:32.797877 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:48:32.797887 | orchestrator | Friday 29 August 2025 14:48:04 +0000 (0:00:00.560) 0:00:49.817 *********
2025-08-29 14:48:32.797899 | orchestrator | ===============================================================================
2025-08-29 14:48:32.797916 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.18s
2025-08-29 14:48:32.797932 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.57s
2025-08-29 14:48:32.797948 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.38s
2025-08-29 14:48:32.797963 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 2.07s
2025-08-29 14:48:32.797979 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.75s
2025-08-29 14:48:32.797996 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.28s
2025-08-29 14:48:32.798066 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.07s
2025-08-29 14:48:32.798089 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.03s
2025-08-29 14:48:32.798105 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.89s
2025-08-29 14:48:32.798122 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.56s
2025-08-29 14:48:32.798140 | orchestrator |
2025-08-29 14:48:32.798156 | orchestrator |
2025-08-29 14:48:32.798173 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-08-29 14:48:32.798189 | orchestrator |
2025-08-29 14:48:32.798205 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-08-29 14:48:32.798220 | orchestrator | Friday 29 August 2025 14:47:38 +0000 (0:00:00.255) 0:00:00.255 *********
2025-08-29 14:48:32.798236 | orchestrator | ok: [testbed-manager]
2025-08-29 14:48:32.798252 | orchestrator |
2025-08-29 14:48:32.798268 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-08-29 14:48:32.798285 | orchestrator | Friday 29 August 2025 14:47:39 +0000 (0:00:01.074) 0:00:01.330 *********
2025-08-29 14:48:32.798334 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-08-29 14:48:32.798353 | orchestrator |
2025-08-29 14:48:32.798368 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-08-29 14:48:32.798384 | orchestrator | Friday 29 August 2025 14:47:40 +0000 (0:00:00.555) 0:00:01.885 *********
2025-08-29 14:48:32.798400 | orchestrator | changed: [testbed-manager]
2025-08-29 14:48:32.798418 | orchestrator |
2025-08-29 14:48:32.798435 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-08-29 14:48:32.798451 | orchestrator | Friday 29 August 2025 14:47:41 +0000 (0:00:01.239) 0:00:03.125 *********
2025-08-29 14:48:32.798468 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-08-29 14:48:32.798484 | orchestrator | ok: [testbed-manager]
2025-08-29 14:48:32.798500 | orchestrator |
2025-08-29 14:48:32.798517 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-08-29 14:48:32.798534 | orchestrator | Friday 29 August 2025 14:48:28 +0000 (0:00:46.656) 0:00:49.781 *********
2025-08-29 14:48:32.798559 | orchestrator | changed: [testbed-manager]
2025-08-29 14:48:32.798578 | orchestrator |
2025-08-29 14:48:32.798595 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:48:32.798613 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:48:32.798630 | orchestrator |
2025-08-29 14:48:32.798648 | orchestrator |
2025-08-29 14:48:32.798665 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:48:32.798694 | orchestrator | Friday 29 August 2025 14:48:32 +0000 (0:00:04.127) 0:00:53.908 *********
2025-08-29 14:48:32.798711 | orchestrator | ===============================================================================
2025-08-29 14:48:32.798728 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 46.66s
2025-08-29 14:48:32.798745 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.13s
2025-08-29 14:48:32.798761 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.24s
2025-08-29 14:48:32.798776 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.07s
2025-08-29 14:48:32.798792 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.56s
2025-08-29 14:48:32.798808 | orchestrator | 2025-08-29 14:48:32 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:48:32.799348 | orchestrator | 2025-08-29 14:48:32 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED
2025-08-29 14:48:32.799380 | orchestrator | 2025-08-29 14:48:32 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:48:35.838947 | orchestrator |
2025-08-29 14:48:35.839013 | orchestrator |
2025-08-29 14:48:35.839020 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 14:48:35.839025 | orchestrator |
2025-08-29 14:48:35.839029 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 14:48:35.839033 | orchestrator | Friday 29 August 2025 14:47:18 +0000 (0:00:00.360) 0:00:00.360 *********
2025-08-29 14:48:35.839039 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-08-29 14:48:35.839046 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-08-29 14:48:35.839052 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-08-29 14:48:35.839059 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-08-29 14:48:35.839065 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-08-29 14:48:35.839072 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-08-29 14:48:35.839078 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-08-29 14:48:35.839084 | orchestrator |
2025-08-29 14:48:35.839091 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-08-29 14:48:35.839097 | orchestrator |
2025-08-29 14:48:35.839121 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-08-29 14:48:35.839128 | orchestrator | Friday 29 August 2025 14:47:19 +0000 (0:00:01.224) 0:00:01.585 *********
2025-08-29 14:48:35.839148 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:48:35.839159 | orchestrator |
2025-08-29 14:48:35.839165 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-08-29 14:48:35.839171 | orchestrator | Friday 29 August 2025 14:47:22 +0000 (0:00:02.471) 0:00:04.056 *********
2025-08-29 14:48:35.839178 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:48:35.839185 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:48:35.839191 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:48:35.839197 | orchestrator | ok: [testbed-manager]
2025-08-29 14:48:35.839204 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:48:35.839210 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:48:35.839216 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:48:35.839223 | orchestrator |
2025-08-29 14:48:35.839229 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-08-29 14:48:35.839235 | orchestrator | Friday 29 August 2025 14:47:24 +0000 (0:00:02.180) 0:00:06.237 *********
2025-08-29 14:48:35.839241 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:48:35.839248 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:48:35.839254 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:48:35.839260 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:48:35.839266 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:48:35.839272 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:48:35.839278 | orchestrator | ok: [testbed-manager]
2025-08-29 14:48:35.839284 | orchestrator |
2025-08-29 14:48:35.839290 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-08-29 14:48:35.839296 | orchestrator | Friday 29 August 2025 14:47:28 +0000 (0:00:04.505) 0:00:10.743 *********
2025-08-29 14:48:35.839339 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:48:35.839346 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:48:35.839353 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:48:35.839359 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:48:35.839365 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:48:35.839371 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:48:35.839377 | orchestrator | changed: [testbed-manager]
2025-08-29 14:48:35.839383 | orchestrator |
2025-08-29 14:48:35.839389 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-08-29 14:48:35.839396 | orchestrator | Friday 29 August 2025 14:47:33 +0000 (0:00:04.607) 0:00:15.350 *********
2025-08-29 14:48:35.839402 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:48:35.839408 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:48:35.839414 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:48:35.839420 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:48:35.839426 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:48:35.839432 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:48:35.839438 | orchestrator | changed: [testbed-manager]
2025-08-29 14:48:35.839445 | orchestrator |
2025-08-29 14:48:35.839457 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-08-29 14:48:35.839463 | orchestrator | Friday 29 August 2025 14:47:45 +0000 (0:00:12.259) 0:00:27.609 *********
2025-08-29 14:48:35.839469 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:48:35.839475 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:48:35.839482 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:48:35.839488 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:48:35.839492 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:48:35.839496 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:48:35.839500 | orchestrator | changed: [testbed-manager]
2025-08-29 14:48:35.839503 | orchestrator |
2025-08-29 14:48:35.839508 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-08-29 14:48:35.839516 | orchestrator | Friday 29 August 2025 14:48:11 +0000 (0:00:25.437) 0:00:53.047 *********
2025-08-29 14:48:35.839522 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:48:35.839528 | orchestrator |
2025-08-29 14:48:35.839532 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-08-29 14:48:35.839537 | orchestrator | Friday 29 August 2025 14:48:12 +0000 (0:00:01.354) 0:00:54.402 *********
2025-08-29 14:48:35.839541 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-08-29 14:48:35.839546 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-08-29 14:48:35.839550 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-08-29 14:48:35.839554 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-08-29 14:48:35.839569 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-08-29 14:48:35.839574 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-08-29 14:48:35.839578 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-08-29 14:48:35.839582 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-08-29 14:48:35.839586 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-08-29 14:48:35.839590 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-08-29 14:48:35.839594 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-08-29 14:48:35.839598 | orchestrator | changed:
[testbed-node-3] => (item=stream.conf) 2025-08-29 14:48:35.839603 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-08-29 14:48:35.839607 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-08-29 14:48:35.839611 | orchestrator | 2025-08-29 14:48:35.839615 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-08-29 14:48:35.839620 | orchestrator | Friday 29 August 2025 14:48:19 +0000 (0:00:06.568) 0:01:00.970 ********* 2025-08-29 14:48:35.839624 | orchestrator | ok: [testbed-manager] 2025-08-29 14:48:35.839628 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:48:35.839633 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:48:35.839637 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:48:35.839641 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:48:35.839645 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:48:35.839649 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:48:35.839654 | orchestrator | 2025-08-29 14:48:35.839658 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-08-29 14:48:35.839662 | orchestrator | Friday 29 August 2025 14:48:20 +0000 (0:00:01.606) 0:01:02.576 ********* 2025-08-29 14:48:35.839669 | orchestrator | changed: [testbed-manager] 2025-08-29 14:48:35.839675 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:48:35.839681 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:48:35.839687 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:48:35.839694 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:48:35.839700 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:48:35.839706 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:48:35.839712 | orchestrator | 2025-08-29 14:48:35.839718 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-08-29 14:48:35.839724 | orchestrator | Friday 29 August 2025 
14:48:22 +0000 (0:00:02.216) 0:01:04.793 ********* 2025-08-29 14:48:35.839730 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:48:35.839736 | orchestrator | ok: [testbed-manager] 2025-08-29 14:48:35.839742 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:48:35.839748 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:48:35.839754 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:48:35.839760 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:48:35.839766 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:48:35.839772 | orchestrator | 2025-08-29 14:48:35.839778 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-08-29 14:48:35.839791 | orchestrator | Friday 29 August 2025 14:48:24 +0000 (0:00:01.715) 0:01:06.509 ********* 2025-08-29 14:48:35.839797 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:48:35.839804 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:48:35.839810 | orchestrator | ok: [testbed-manager] 2025-08-29 14:48:35.839816 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:48:35.839822 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:48:35.839828 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:48:35.839834 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:48:35.839841 | orchestrator | 2025-08-29 14:48:35.839847 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-08-29 14:48:35.839854 | orchestrator | Friday 29 August 2025 14:48:27 +0000 (0:00:02.648) 0:01:09.157 ********* 2025-08-29 14:48:35.839861 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-08-29 14:48:35.839869 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:48:35.839877 | orchestrator | 
2025-08-29 14:48:35.839883 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-08-29 14:48:35.839889 | orchestrator | Friday 29 August 2025  14:48:29 +0000 (0:00:02.152) 0:01:11.309 *********
2025-08-29 14:48:35.839896 | orchestrator | changed: [testbed-manager]
2025-08-29 14:48:35.839905 | orchestrator |
2025-08-29 14:48:35.839911 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-08-29 14:48:35.839917 | orchestrator | Friday 29 August 2025  14:48:31 +0000 (0:00:01.955) 0:01:13.265 *********
2025-08-29 14:48:35.839922 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:48:35.839929 | orchestrator | changed: [testbed-manager]
2025-08-29 14:48:35.839934 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:48:35.839941 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:48:35.839947 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:48:35.839953 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:48:35.839960 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:48:35.839967 | orchestrator |
2025-08-29 14:48:35.839971 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:48:35.839975 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:48:35.839980 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:48:35.839984 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:48:35.839988 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:48:35.839996 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:48:35.840000 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:48:35.840004 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:48:35.840007 | orchestrator |
2025-08-29 14:48:35.840011 | orchestrator |
2025-08-29 14:48:35.840015 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:48:35.840019 | orchestrator | Friday 29 August 2025  14:48:34 +0000 (0:00:02.863) 0:01:16.128 *********
2025-08-29 14:48:35.840023 | orchestrator | ===============================================================================
2025-08-29 14:48:35.840026 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 25.44s
2025-08-29 14:48:35.840034 | orchestrator | osism.services.netdata : Add repository -------------------------------- 12.26s
2025-08-29 14:48:35.840038 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.57s
2025-08-29 14:48:35.840041 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 4.61s
2025-08-29 14:48:35.840045 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.51s
2025-08-29 14:48:35.840049 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.86s
2025-08-29 14:48:35.840053 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.65s
2025-08-29 14:48:35.840056 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.47s
2025-08-29 14:48:35.840060 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.22s
2025-08-29 14:48:35.840064 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.18s
2025-08-29 14:48:35.840067 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.15s
2025-08-29 14:48:35.840073 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.96s
2025-08-29 14:48:35.840079 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.72s
2025-08-29 14:48:35.840086 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.61s
2025-08-29 14:48:35.840092 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.35s
2025-08-29 14:48:35.840098 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.22s
2025-08-29 14:48:35.840105 | orchestrator | 2025-08-29 14:48:35 | INFO  | Task e3fd924c-ea7b-472a-97f4-78e0929cf360 is in state SUCCESS
2025-08-29 14:48:35.841000 | orchestrator | 2025-08-29 14:48:35 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:48:35.842248 | orchestrator | 2025-08-29 14:48:35 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:48:35.843340 | orchestrator | 2025-08-29 14:48:35 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED
2025-08-29 14:48:35.843362 | orchestrator | 2025-08-29 14:48:35 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:48:38.884378 | orchestrator | 2025-08-29 14:48:38 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:48:38.884444 | orchestrator | 2025-08-29 14:48:38 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:48:38.884453 | orchestrator | 2025-08-29 14:48:38 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state STARTED
2025-08-29 14:48:38.884472 | orchestrator | 2025-08-29 14:48:38 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:48:41.938275 | orchestrator | 2025-08-29 14:48:41 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:48:41.938383 | orchestrator
2025-08-29 14:50:35.016072 | orchestrator | 2025-08-29 14:50:35 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:50:35.016224 | orchestrator | 2025-08-29 14:50:35 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:50:35.021228 | orchestrator | 2025-08-29 14:50:35 | INFO  | Task 1db6b6b6-79de-4dfa-8cda-d032e3829c9e is in state SUCCESS
2025-08-29 14:50:35.025534 | orchestrator |
2025-08-29 14:50:35.025598 | orchestrator |
2025-08-29 14:50:35.025610 | orchestrator | PLAY [Apply role common] *******************************************************
2025-08-29 14:50:35.025623 | orchestrator |
2025-08-29 14:50:35.025635 | orchestrator | TASK [common : include_tasks] **************************************************
2025-08-29 14:50:35.025647 | orchestrator | Friday 29 August 2025
14:47:08 +0000 (0:00:00.230) 0:00:00.230 *********
2025-08-29 14:50:35.025660 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:50:35.025673 | orchestrator |
2025-08-29 14:50:35.025684 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-08-29 14:50:35.025695 | orchestrator | Friday 29 August 2025 14:47:09 +0000 (0:00:01.089) 0:00:01.320 *********
2025-08-29 14:50:35.025706 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:50:35.025717 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:50:35.025744 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:50:35.025756 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:50:35.025779 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:50:35.025790 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:50:35.025801 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:50:35.025812 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:50:35.025822 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:50:35.025833 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:50:35.025844 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:50:35.025855 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:50:35.025866 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:50:35.025876 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:50:35.025887 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:50:35.025898 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:50:35.025909 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:50:35.025920 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:50:35.025931 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:50:35.025941 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:50:35.025952 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:50:35.025963 | orchestrator |
2025-08-29 14:50:35.025974 | orchestrator | TASK [common : include_tasks] **************************************************
2025-08-29 14:50:35.025984 | orchestrator | Friday 29 August 2025 14:47:13 +0000 (0:00:04.273) 0:00:05.593 *********
2025-08-29 14:50:35.025995 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:50:35.026008 | orchestrator |
2025-08-29 14:50:35.026143 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-08-29 14:50:35.026158 | orchestrator | Friday 29 August 2025 14:47:14 +0000 (0:00:01.272) 0:00:06.865 *********
2025-08-29 14:50:35.026203 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:35.026222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:35.026262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:35.026276 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:35.026289 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:35.026325 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:35.026339 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:35.026352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:35.026379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:35.026418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:35.026430 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:35.026442 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:35.026453 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:35.026464 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:35.026484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:35.026513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:35.026524 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:35.026544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.026556 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.026567 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.026578 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.026589 | orchestrator |
2025-08-29 14:50:35.026601 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-08-29 14:50:35.026612 | orchestrator | Friday 29 August 2025 14:47:19 +0000 (0:00:05.126) 0:00:11.992 *********
2025-08-29 14:50:35.026623 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group':
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:35.026643 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:35.026660 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:35.026672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:35.026695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:35.026707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:35.026719 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:50:35.026732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:35.026743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:35.026755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:35.026775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:35.026794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:35.026806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:35.026817 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:50:35.026829 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:50:35.026854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:35.026866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:35.026878 | orchestrator | 
skipping: [testbed-node-2] 2025-08-29 14:50:35.026889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:35.026900 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:50:35.026912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:35.026935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:35.026947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:35.026958 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:50:35.026973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:50:35.026992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:35.027004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:35.027015 | orchestrator | 
skipping: [testbed-node-5]
2025-08-29 14:50:35.027026 | orchestrator |
2025-08-29 14:50:35.027037 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-08-29 14:50:35.027048 | orchestrator | Friday 29 August 2025 14:47:20 +0000 (0:00:00.872) 0:00:12.865 *********
2025-08-29 14:50:35.027059 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.027071 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027089 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027100 |
orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.027116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027139 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:50:35.027161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.027173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027202 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:50:35.027213 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:50:35.027224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.027235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027258 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:50:35.027273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.027291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027343 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:50:35.027355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.027373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027396 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:50:35.027407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.027423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027445 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:50:35.027457 | orchestrator |
2025-08-29 14:50:35.027467 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-08-29 14:50:35.027478 | orchestrator | Friday 29 August 2025 14:47:24 +0000 (0:00:03.423) 0:00:16.288 *********
2025-08-29 14:50:35.027489 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:50:35.027500 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:50:35.027511 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:50:35.027522 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:50:35.027533 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:50:35.027550 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:50:35.027562 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:50:35.027572 | orchestrator |
2025-08-29 14:50:35.027583 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-08-29 14:50:35.027594 | orchestrator | Friday 29 August 2025 14:47:25 +0000 (0:00:01.093) 0:00:17.382 *********
2025-08-29 14:50:35.027612 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:50:35.027623 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:50:35.027633 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:50:35.027644 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:50:35.027654 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:50:35.027665 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:50:35.027676 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:50:35.027686 | orchestrator |
2025-08-29 14:50:35.027697 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-08-29 14:50:35.027708 | orchestrator | Friday 29 August 2025 14:47:26 +0000 (0:00:01.646) 0:00:19.028 *********
2025-08-29 14:50:35.027719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.027731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.027742 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
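[editor's note] The service definitions dumped in the loop items above mount volumes with Docker-style `source:target[:mode]` strings, e.g. `/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro` or the named volume `kolla_logs:/var/log/kolla/`. A minimal Python sketch of how such a spec decomposes (the helper name `parse_volume` is illustrative, not part of kolla-ansible):

```python
def parse_volume(spec: str) -> tuple[str, str, str]:
    """Split a Docker-style volume spec 'source:target[:mode]' into parts.

    Matches the strings seen in the kolla service dicts in this log;
    paths here contain no extra colons, so a plain split suffices.
    """
    parts = spec.split(":")
    if len(parts) == 3:
        source, target, mode = parts
    elif len(parts) == 2:
        source, target = parts
        mode = "rw"  # Docker's default access mode when none is given
    else:
        raise ValueError(f"unexpected volume spec: {spec!r}")
    return source, target, mode

# Bind mount with explicit read-only mode:
print(parse_volume("/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro"))
# Named volume, mode defaults to rw:
print(parse_volume("kolla_logs:/var/log/kolla/"))
```

The `:ro` suffix on the config and timezone mounts explains why those containers can read but never modify the host-side files.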
2025-08-29 14:50:35.027754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.027770 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.027782 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.027800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027830 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027841 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.027853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027893 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027944 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027967 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027978 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.027989 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.028001 | orchestrator |
2025-08-29 14:50:35.028012 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-08-29 14:50:35.028029 | orchestrator | Friday 29 August 2025 14:47:35 +0000 (0:00:09.018) 0:00:28.046 *********
2025-08-29 14:50:35.028040 | orchestrator | [WARNING]: Skipped
2025-08-29 14:50:35.028054 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-08-29 14:50:35.028065 | orchestrator | to this access issue:
2025-08-29 14:50:35.028076 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-08-29 14:50:35.028087 | orchestrator | directory
2025-08-29 14:50:35.028098 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 14:50:35.028109 | orchestrator |
2025-08-29 14:50:35.028120 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-08-29 14:50:35.028131 | orchestrator | Friday 29 August 2025 14:47:37 +0000 (0:00:01.434) 0:00:29.481 *********
2025-08-29 14:50:35.028142 | orchestrator | [WARNING]: Skipped
2025-08-29 14:50:35.028153 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-08-29 14:50:35.028169 | orchestrator | to this access issue:
2025-08-29 14:50:35.028180 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-08-29 14:50:35.028191 | orchestrator | directory
2025-08-29 14:50:35.028202 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 14:50:35.028213 | orchestrator |
2025-08-29 14:50:35.028224 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-08-29 14:50:35.028235 | orchestrator | Friday 29 August 2025 14:47:39 +0000 (0:00:02.120) 0:00:31.601 *********
2025-08-29 14:50:35.028245 | orchestrator | [WARNING]: Skipped
2025-08-29 14:50:35.028257 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-08-29 14:50:35.028267 | orchestrator | to this access issue:
2025-08-29 14:50:35.028278 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-08-29 14:50:35.028289 | orchestrator | directory
2025-08-29 14:50:35.028300 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 14:50:35.028327 | orchestrator |
2025-08-29 14:50:35.028338 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-08-29 14:50:35.028348 | orchestrator | Friday 29 August 2025 14:47:40 +0000 (0:00:00.840) 0:00:32.441 *********
2025-08-29 14:50:35.028359 | orchestrator | [WARNING]: Skipped
2025-08-29 14:50:35.028370 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-08-29 14:50:35.028381 | orchestrator | to this access issue:
2025-08-29 14:50:35.028392 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-08-29 14:50:35.028402 | orchestrator | directory
2025-08-29 14:50:35.028413 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 14:50:35.028424 | orchestrator |
2025-08-29 14:50:35.028435 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-08-29 14:50:35.028446 | orchestrator | Friday 29 August 2025 14:47:41 +0000 (0:00:00.851) 0:00:33.292 *********
2025-08-29 14:50:35.028457 | orchestrator | changed: [testbed-manager]
2025-08-29 14:50:35.028467 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:50:35.028478 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:50:35.028489 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:50:35.028500 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:50:35.028510 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:50:35.028521 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:50:35.028532 | orchestrator |
2025-08-29 14:50:35.028543 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-08-29 14:50:35.028553 | orchestrator | Friday 29 August 2025 14:47:45 +0000 (0:00:04.368) 0:00:37.661 *********
2025-08-29 14:50:35.028564 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 14:50:35.028576 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 14:50:35.028593 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 14:50:35.028604 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 14:50:35.028615 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 14:50:35.028626 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 14:50:35.028636 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 14:50:35.028647 | orchestrator |
2025-08-29 14:50:35.028658 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-08-29 14:50:35.028669 | orchestrator | Friday 29 August 2025 14:47:49 +0000 (0:00:03.988) 0:00:41.649 *********
2025-08-29 14:50:35.028680 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:50:35.028691 | orchestrator | changed: [testbed-manager]
2025-08-29 14:50:35.028701 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:50:35.028712 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:50:35.028723 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:50:35.028733 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:50:35.028744 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:50:35.028755 | orchestrator |
2025-08-29 14:50:35.028765 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-08-29 14:50:35.028776 | orchestrator | Friday 29 August 2025 14:47:52 +0000 (0:00:02.925) 0:00:44.575 ********* 2025-08-29 14:50:35.028801 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:35.028819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:35.028831 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 
14:50:35.028843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:35.028854 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:35.028872 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:35.028884 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:35.028901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:35.028913 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:50:35.028931 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:35.028942 | orchestrator | ok: [testbed-node-1] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:35.028954 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:50:35.028976 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:50:35.028988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029000 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.029016 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029028 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029045 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029057 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.029068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029086 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029097 | orchestrator |
2025-08-29 14:50:35.029108 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-08-29 14:50:35.029119 | orchestrator | Friday 29 August 2025 14:47:56 +0000 (0:00:03.682) 0:00:48.258 *********
2025-08-29 14:50:35.029130 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 14:50:35.029141 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 14:50:35.029152 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 14:50:35.029163 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 14:50:35.029174 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 14:50:35.029184 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 14:50:35.029195 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 14:50:35.029206 | orchestrator |
2025-08-29 14:50:35.029216 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-08-29 14:50:35.029227 | orchestrator | Friday 29 August 2025 14:47:59 +0000 (0:00:03.509) 0:00:51.767 *********
2025-08-29 14:50:35.029238 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 14:50:35.029249 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 14:50:35.029260 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 14:50:35.029271 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 14:50:35.029281 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 14:50:35.029292 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 14:50:35.029329 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 14:50:35.029341 | orchestrator |
2025-08-29 14:50:35.029352 | orchestrator | TASK [common : Check common containers] ****************************************
2025-08-29 14:50:35.029362 | orchestrator | Friday 29 August 2025 14:48:02 +0000 (0:00:03.253) 0:00:55.021 *********
2025-08-29 14:50:35.029374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.029392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.029412 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.029424 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.029435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.029446 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.029458 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:50:35.029474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible',
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029522 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029534 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029545 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029670 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029697 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029708 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029719 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:50:35.029730 | orchestrator |
2025-08-29 14:50:35.029741 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-08-29 14:50:35.029752 | orchestrator | Friday 29 August 2025 14:48:05 +0000 (0:00:03.157) 0:00:58.178 *********
2025-08-29 14:50:35.029763 | orchestrator | changed: [testbed-manager]
2025-08-29 14:50:35.029774 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:50:35.029785 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:50:35.029796 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:50:35.029806 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:50:35.029817 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:50:35.029828 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:50:35.029838 | orchestrator |
2025-08-29 14:50:35.029849 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-08-29 14:50:35.029860 | orchestrator | Friday 29 August 2025 14:48:07 +0000 (0:00:01.474) 0:00:59.652 *********
2025-08-29 14:50:35.029870 | orchestrator | changed: [testbed-manager]
2025-08-29 14:50:35.029881 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:50:35.029892 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:50:35.029902 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:50:35.029913 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:50:35.029923 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:50:35.029934 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:50:35.029945 | orchestrator |
2025-08-29 14:50:35.029955 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 14:50:35.029966 | orchestrator | Friday 29 August 2025 14:48:08 +0000 (0:00:01.267) 0:01:00.920 *********
2025-08-29 14:50:35.029985 | orchestrator |
2025-08-29 14:50:35.029996 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 14:50:35.030007 | orchestrator | Friday 29 August 2025 14:48:08 +0000 (0:00:00.081) 0:01:01.001 *********
2025-08-29 14:50:35.030052 | orchestrator |
2025-08-29 14:50:35.030071 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 14:50:35.030082 | orchestrator | Friday 29 August 2025 14:48:08 +0000 (0:00:00.067) 0:01:01.069 *********
2025-08-29 14:50:35.030093 | orchestrator |
2025-08-29 14:50:35.030104 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 14:50:35.030115 | orchestrator | Friday 29 August 2025 14:48:08 +0000 (0:00:00.063) 0:01:01.133 *********
2025-08-29 14:50:35.030125 | orchestrator |
2025-08-29 14:50:35.030136 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 14:50:35.030147 | orchestrator | Friday 29 August 2025 14:48:09 +0000 (0:00:00.222) 0:01:01.356 *********
2025-08-29 14:50:35.030158 | orchestrator |
2025-08-29 14:50:35.030168 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 14:50:35.030179 | orchestrator | Friday 29 August 2025 14:48:09 +0000 (0:00:00.071) 0:01:01.427 *********
2025-08-29 14:50:35.030190 | orchestrator |
2025-08-29 14:50:35.030201 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 14:50:35.030212 | orchestrator | Friday 29 August 2025 14:48:09 +0000 (0:00:00.109) 0:01:01.537 *********
2025-08-29 14:50:35.030222 | orchestrator |
2025-08-29 14:50:35.030233 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-08-29 14:50:35.030250 | orchestrator | Friday 29 August 2025 14:48:09 +0000 (0:00:00.112) 0:01:01.650 *********
2025-08-29 14:50:35.030261 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:50:35.030272 | orchestrator | changed: [testbed-manager]
2025-08-29 14:50:35.030283 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:50:35.030294 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:50:35.030385 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:50:35.030397 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:50:35.030408 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:50:35.030418 | orchestrator |
2025-08-29 14:50:35.030429 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-08-29 14:50:35.030440 | orchestrator | Friday 29 August 2025 14:48:59 +0000 (0:00:50.107) 0:01:51.758 *********
2025-08-29 14:50:35.030451 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:50:35.030462 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:50:35.030473 | orchestrator | changed: [testbed-manager]
2025-08-29 14:50:35.030484 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:50:35.030494 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:50:35.030505 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:50:35.030515 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:50:35.030526 | orchestrator |
2025-08-29 14:50:35.030537 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-08-29 14:50:35.030548 | orchestrator | Friday 29 August 2025 14:50:22 +0000 (0:01:23.354) 0:03:15.113 *********
2025-08-29 14:50:35.030559 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:50:35.030570 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:50:35.030581 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:50:35.030591 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:50:35.030602 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:50:35.030613 | orchestrator | ok: [testbed-manager]
2025-08-29 14:50:35.030624 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:50:35.030634 | orchestrator |
2025-08-29 14:50:35.030645 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-08-29 14:50:35.030656 | orchestrator | Friday 29 August 2025 14:50:25 +0000 (0:00:02.421) 0:03:17.534 *********
2025-08-29 14:50:35.030667 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:50:35.030678 | orchestrator | changed: [testbed-manager]
2025-08-29 14:50:35.030689 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:50:35.030712 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:50:35.030723 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:50:35.030733 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:50:35.030744 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:50:35.030755 | orchestrator |
2025-08-29 14:50:35.030765 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:50:35.030778 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 14:50:35.030790 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 14:50:35.030801 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 14:50:35.030812 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 14:50:35.030823 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 14:50:35.030834 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 14:50:35.030845 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 14:50:35.030856 | orchestrator |
2025-08-29 14:50:35.030866 | orchestrator |
2025-08-29 14:50:35.030877 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:50:35.030888 | orchestrator | Friday 29 August 2025 14:50:33 +0000 (0:00:08.116) 0:03:25.651 *********
2025-08-29 14:50:35.030899 | orchestrator | ===============================================================================
2025-08-29 14:50:35.030908 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 83.35s
2025-08-29 14:50:35.030916 | orchestrator | common : Restart fluentd container ------------------------------------- 50.11s
2025-08-29 14:50:35.030929 | orchestrator | common : Copying over config.json files for services -------------------- 9.02s
2025-08-29 14:50:35.030937 | orchestrator | common : Restart cron container ----------------------------------------- 8.12s
2025-08-29 14:50:35.030944 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.13s
2025-08-29 14:50:35.030952 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.37s
2025-08-29 14:50:35.030960 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.27s
2025-08-29 14:50:35.030968 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.99s
2025-08-29 14:50:35.030975 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.68s
2025-08-29 14:50:35.030983 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.51s
2025-08-29 14:50:35.030991 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.42s
2025-08-29 14:50:35.030999 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.25s
2025-08-29 14:50:35.031007 | orchestrator | common : Check common containers ---------------------------------------- 3.16s
2025-08-29 14:50:35.031015 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.93s
2025-08-29 14:50:35.031027 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.42s
2025-08-29 14:50:35.031035 | orchestrator | common : Find custom fluentd filter config files ------------------------ 2.12s
2025-08-29 14:50:35.031043 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.65s
2025-08-29 14:50:35.031051 | orchestrator | common : Creating log volume -------------------------------------------- 1.47s
2025-08-29 14:50:35.031064 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.43s
2025-08-29 14:50:35.031072 | orchestrator | common : include_tasks -------------------------------------------------- 1.27s
2025-08-29 14:50:35.031079 | orchestrator | 2025-08-29 14:50:35 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:50:38.066293 | orchestrator | 2025-08-29 14:50:38 | INFO  | Task f5f127fc-0609-4463-b3f7-335c65b22f0a is in state STARTED
2025-08-29 14:50:38.069105 | orchestrator | 2025-08-29 14:50:38 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:50:38.069133 | orchestrator | 2025-08-29 14:50:38 | INFO  | Task 953acfd8-0582-4c0a-85c6-03d667907f3a is in state STARTED
2025-08-29 14:50:38.069635 | orchestrator | 2025-08-29 14:50:38 | INFO  | Task 8c7aff9e-7c60-4a7c-ae19-738e986c164a is in state STARTED
2025-08-29 14:50:38.071527 | orchestrator | 2025-08-29 14:50:38 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:50:38.074451 | orchestrator | 2025-08-29 14:50:38 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:50:38.074485 | orchestrator | 2025-08-29 14:50:38 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:50:41.105792 | orchestrator | 2025-08-29 14:50:41 | INFO  | Task f5f127fc-0609-4463-b3f7-335c65b22f0a is in state STARTED
2025-08-29 14:50:41.106467 | orchestrator | 2025-08-29 14:50:41 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:50:41.106962 | orchestrator | 2025-08-29 14:50:41 | INFO  | Task 953acfd8-0582-4c0a-85c6-03d667907f3a is in state STARTED
2025-08-29 14:50:41.108268 | orchestrator | 2025-08-29 14:50:41 | INFO  | Task 8c7aff9e-7c60-4a7c-ae19-738e986c164a is in state STARTED
2025-08-29 14:50:41.109560 | orchestrator | 2025-08-29 14:50:41 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:50:41.110341 | orchestrator | 2025-08-29 14:50:41 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:50:41.110465 | orchestrator | 2025-08-29 14:50:41 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:50:44.130586 | orchestrator | 2025-08-29 14:50:44 | INFO  | Task f5f127fc-0609-4463-b3f7-335c65b22f0a is in state STARTED
2025-08-29 14:50:44.130808 | orchestrator | 2025-08-29 14:50:44 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:50:44.131423 | orchestrator | 2025-08-29 14:50:44 | INFO  | Task 953acfd8-0582-4c0a-85c6-03d667907f3a is in state STARTED
2025-08-29 14:50:44.133282 | orchestrator | 2025-08-29 14:50:44 | INFO  | Task 8c7aff9e-7c60-4a7c-ae19-738e986c164a is in state STARTED
2025-08-29 14:50:44.133775 | orchestrator | 2025-08-29 14:50:44 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:50:44.134534 | orchestrator | 2025-08-29 14:50:44 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:50:44.134971 | orchestrator | 2025-08-29 14:50:44 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:50:47.165718 | orchestrator | 2025-08-29 14:50:47 | INFO  | Task f5f127fc-0609-4463-b3f7-335c65b22f0a is in state STARTED
2025-08-29 14:50:47.167420 | orchestrator | 2025-08-29 14:50:47 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:50:47.168532 | orchestrator | 2025-08-29 14:50:47 | INFO  | Task 953acfd8-0582-4c0a-85c6-03d667907f3a is in state STARTED
2025-08-29 14:50:47.170309 | orchestrator | 2025-08-29 14:50:47 | INFO  | Task 8c7aff9e-7c60-4a7c-ae19-738e986c164a is in state STARTED
2025-08-29 14:50:47.172336 | orchestrator | 2025-08-29 14:50:47 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:50:47.173342 | orchestrator | 2025-08-29 14:50:47 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:50:47.173368 | orchestrator | 2025-08-29 14:50:47 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:50:50.212130 | orchestrator | 2025-08-29 14:50:50 | INFO  | Task f5f127fc-0609-4463-b3f7-335c65b22f0a is in state STARTED
2025-08-29 14:50:50.217099 | orchestrator | 2025-08-29 14:50:50 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:50:50.220155 | orchestrator | 2025-08-29 14:50:50 | INFO  | Task 953acfd8-0582-4c0a-85c6-03d667907f3a is in state STARTED
2025-08-29 14:50:50.224876 | orchestrator | 2025-08-29 14:50:50 | INFO  | Task 8c7aff9e-7c60-4a7c-ae19-738e986c164a is in state STARTED
2025-08-29 14:50:50.225809 | orchestrator | 2025-08-29 14:50:50 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:50:50.226616 | orchestrator | 2025-08-29 14:50:50 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:50:50.226705 | orchestrator | 2025-08-29 14:50:50 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:50:53.454688 | orchestrator | 2025-08-29 14:50:53 | INFO  | Task f5f127fc-0609-4463-b3f7-335c65b22f0a is in state STARTED
2025-08-29 14:50:53.454903 | orchestrator | 2025-08-29 14:50:53 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:50:53.456003 | orchestrator | 2025-08-29 14:50:53 | INFO  | Task 953acfd8-0582-4c0a-85c6-03d667907f3a is in state STARTED
2025-08-29 14:50:53.456530 | orchestrator | 2025-08-29 14:50:53 | INFO  | Task 8c7aff9e-7c60-4a7c-ae19-738e986c164a is in state STARTED 2025-08-29 14:50:53.457273 | orchestrator | 2025-08-29 14:50:53 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:50:53.457943 | orchestrator | 2025-08-29 14:50:53 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED 2025-08-29 14:50:53.457969 | orchestrator | 2025-08-29 14:50:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:56.986658 | orchestrator | 2025-08-29 14:50:56 | INFO  | Task f5f127fc-0609-4463-b3f7-335c65b22f0a is in state STARTED 2025-08-29 14:50:56.986931 | orchestrator | 2025-08-29 14:50:56 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:50:56.988407 | orchestrator | 2025-08-29 14:50:56 | INFO  | Task 953acfd8-0582-4c0a-85c6-03d667907f3a is in state STARTED 2025-08-29 14:50:56.989268 | orchestrator | 2025-08-29 14:50:56 | INFO  | Task 8c7aff9e-7c60-4a7c-ae19-738e986c164a is in state STARTED 2025-08-29 14:50:56.990185 | orchestrator | 2025-08-29 14:50:56 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:50:56.990613 | orchestrator | 2025-08-29 14:50:56 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED 2025-08-29 14:50:56.990769 | orchestrator | 2025-08-29 14:50:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:00.063355 | orchestrator | 2025-08-29 14:51:00 | INFO  | Task f5f127fc-0609-4463-b3f7-335c65b22f0a is in state STARTED 2025-08-29 14:51:00.063449 | orchestrator | 2025-08-29 14:51:00 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:51:00.063458 | orchestrator | 2025-08-29 14:51:00 | INFO  | Task 953acfd8-0582-4c0a-85c6-03d667907f3a is in state SUCCESS 2025-08-29 14:51:00.063463 | orchestrator | 2025-08-29 14:51:00 | INFO  | Task 8c7aff9e-7c60-4a7c-ae19-738e986c164a is in state STARTED 
2025-08-29 14:51:00.063469 | orchestrator | 2025-08-29 14:51:00 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:51:00.063488 | orchestrator | 2025-08-29 14:51:00 | INFO  | Task 4dbdbbf1-fdaf-4cb9-a8d3-e8093fcc4753 is in state STARTED
2025-08-29 14:51:00.064004 | orchestrator | 2025-08-29 14:51:00 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:51:00.064024 | orchestrator | 2025-08-29 14:51:00 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:51:03.317634 | orchestrator | 2025-08-29 14:51:03 | INFO  | Task f5f127fc-0609-4463-b3f7-335c65b22f0a is in state STARTED
2025-08-29 14:51:03.317940 | orchestrator | 2025-08-29 14:51:03 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:51:03.318825 | orchestrator | 2025-08-29 14:51:03 | INFO  | Task 8c7aff9e-7c60-4a7c-ae19-738e986c164a is in state STARTED
2025-08-29 14:51:03.319668 | orchestrator | 2025-08-29 14:51:03 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:51:03.320516 | orchestrator | 2025-08-29 14:51:03 | INFO  | Task 4dbdbbf1-fdaf-4cb9-a8d3-e8093fcc4753 is in state STARTED
2025-08-29 14:51:03.321248 | orchestrator | 2025-08-29 14:51:03 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state STARTED
2025-08-29 14:51:03.321330 | orchestrator | 2025-08-29 14:51:03 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:51:06.355170 | orchestrator | 2025-08-29 14:51:06 | INFO  | Task f5f127fc-0609-4463-b3f7-335c65b22f0a is in state STARTED
2025-08-29 14:51:06.359002 | orchestrator | 2025-08-29 14:51:06 | INFO  | Task e8fdb1bc-a4b1-4826-9a77-7dfd25acb899 is in state STARTED
2025-08-29 14:51:06.359658 | orchestrator | 2025-08-29 14:51:06 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:51:06.360287 | orchestrator | 2025-08-29 14:51:06 | INFO  | Task 8c7aff9e-7c60-4a7c-ae19-738e986c164a is in state STARTED
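The interleaved "is in state STARTED" records above come from a client that polls each background task until it reaches a terminal state, sleeping between rounds. A minimal sketch of such a loop, assuming a hypothetical `get_task_state` stub in place of the real task API:

```python
import time

# Hypothetical in-memory state store; the real client queries a task API.
TASK_STATES = {}

def get_task_state(task_id):
    """Stub: unknown tasks report STARTED, mirroring the log above."""
    return TASK_STATES.get(task_id, "STARTED")

def wait_for_tasks(task_ids, interval=1.0, get_state=get_task_state):
    """Poll every pending task, logging its state each round, until all
    tasks reach a terminal state (SUCCESS or FAILURE)."""
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                results[task_id] = state
        pending -= set(results)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

Note how a task drops out of the printed list once it reaches SUCCESS, exactly as task 953acfd8… does in the log.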
2025-08-29 14:51:06.361091 | orchestrator | 2025-08-29 14:51:06 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:51:06.363143 | orchestrator | 2025-08-29 14:51:06 | INFO  | Task 4dbdbbf1-fdaf-4cb9-a8d3-e8093fcc4753 is in state STARTED
2025-08-29 14:51:06.365799 | orchestrator | 2025-08-29 14:51:06 | INFO  | Task 3b15b37c-c1c1-4e9a-948d-b9ab4bd12c1d is in state SUCCESS
2025-08-29 14:51:06.367571 | orchestrator |
2025-08-29 14:51:06.367634 | orchestrator |
2025-08-29 14:51:06.367656 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 14:51:06.367678 | orchestrator |
2025-08-29 14:51:06.367697 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 14:51:06.367714 | orchestrator | Friday 29 August 2025 14:50:41 +0000 (0:00:00.293) 0:00:00.293 *********
2025-08-29 14:51:06.367727 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:51:06.367739 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:51:06.367750 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:51:06.367761 | orchestrator |
2025-08-29 14:51:06.367772 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 14:51:06.367784 | orchestrator | Friday 29 August 2025 14:50:41 +0000 (0:00:00.360) 0:00:00.653 *********
2025-08-29 14:51:06.367795 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-08-29 14:51:06.367806 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-08-29 14:51:06.367817 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-08-29 14:51:06.367828 | orchestrator |
2025-08-29 14:51:06.367839 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-08-29 14:51:06.367850 | orchestrator |
2025-08-29 14:51:06.367861 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-08-29 14:51:06.367872 | orchestrator | Friday 29 August 2025 14:50:42 +0000 (0:00:00.533) 0:00:01.187 *********
2025-08-29 14:51:06.367883 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:51:06.367914 | orchestrator |
2025-08-29 14:51:06.367926 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-08-29 14:51:06.367937 | orchestrator | Friday 29 August 2025 14:50:42 +0000 (0:00:00.517) 0:00:01.705 *********
2025-08-29 14:51:06.367948 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-08-29 14:51:06.367959 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-08-29 14:51:06.367970 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-08-29 14:51:06.367981 | orchestrator |
2025-08-29 14:51:06.367992 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-08-29 14:51:06.368003 | orchestrator | Friday 29 August 2025 14:50:43 +0000 (0:00:00.845) 0:00:02.550 *********
2025-08-29 14:51:06.368013 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-08-29 14:51:06.368026 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-08-29 14:51:06.368045 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-08-29 14:51:06.368062 | orchestrator |
2025-08-29 14:51:06.368081 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-08-29 14:51:06.368100 | orchestrator | Friday 29 August 2025 14:50:45 +0000 (0:00:02.096) 0:00:04.646 *********
2025-08-29 14:51:06.368111 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:51:06.368122 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:51:06.368133 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:51:06.368147 | orchestrator |
2025-08-29 14:51:06.368160 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-08-29 14:51:06.368173 | orchestrator | Friday 29 August 2025 14:50:48 +0000 (0:00:02.290) 0:00:06.937 *********
2025-08-29 14:51:06.368186 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:51:06.368199 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:51:06.368211 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:51:06.368224 | orchestrator |
2025-08-29 14:51:06.368236 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:51:06.368249 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:51:06.368271 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:51:06.368284 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:51:06.368336 | orchestrator |
2025-08-29 14:51:06.368350 | orchestrator |
2025-08-29 14:51:06.368362 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:51:06.368375 | orchestrator | Friday 29 August 2025 14:50:56 +0000 (0:00:08.730) 0:00:15.668 *********
2025-08-29 14:51:06.368388 | orchestrator | ===============================================================================
2025-08-29 14:51:06.368401 | orchestrator | memcached : Restart memcached container --------------------------------- 8.73s
2025-08-29 14:51:06.368414 | orchestrator | memcached : Check memcached container ----------------------------------- 2.29s
2025-08-29 14:51:06.368426 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.10s
2025-08-29 14:51:06.368439 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.85s
2025-08-29 14:51:06.368451 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s
2025-08-29 14:51:06.368484 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.52s
2025-08-29 14:51:06.368506 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s
2025-08-29 14:51:06.368525 | orchestrator |
2025-08-29 14:51:06.368544 | orchestrator |
2025-08-29 14:51:06.368558 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-08-29 14:51:06.368569 | orchestrator |
2025-08-29 14:51:06.368579 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-08-29 14:51:06.368600 | orchestrator | Friday 29 August 2025 14:47:08 +0000 (0:00:00.207) 0:00:00.207 *********
2025-08-29 14:51:06.368611 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:51:06.368622 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:51:06.368633 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:51:06.368644 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:51:06.368654 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:51:06.368665 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:51:06.368675 | orchestrator |
2025-08-29 14:51:06.368700 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-08-29 14:51:06.368711 | orchestrator | Friday 29 August 2025 14:47:09 +0000 (0:00:00.656) 0:00:00.864 *********
2025-08-29 14:51:06.368722 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:51:06.368732 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:51:06.368743 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:51:06.368754 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:51:06.368764 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:51:06.368775 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:51:06.368785 | orchestrator |
2025-08-29 14:51:06.368796 | orchestrator | 
TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-08-29 14:51:06.368806 | orchestrator | Friday 29 August 2025 14:47:09 +0000 (0:00:00.656) 0:00:01.520 ********* 2025-08-29 14:51:06.368817 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:06.368827 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:06.368838 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:06.368848 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:06.368859 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:06.368870 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:06.368880 | orchestrator | 2025-08-29 14:51:06.368891 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-08-29 14:51:06.368901 | orchestrator | Friday 29 August 2025 14:47:10 +0000 (0:00:00.752) 0:00:02.273 ********* 2025-08-29 14:51:06.368912 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:51:06.368922 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:51:06.368933 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:51:06.368944 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:06.368954 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:06.368964 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:06.368975 | orchestrator | 2025-08-29 14:51:06.368986 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-08-29 14:51:06.368996 | orchestrator | Friday 29 August 2025 14:47:12 +0000 (0:00:01.792) 0:00:04.065 ********* 2025-08-29 14:51:06.369007 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:51:06.369017 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:51:06.369028 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:51:06.369038 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:06.369049 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:06.369059 | 
orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:06.369070 | orchestrator | 2025-08-29 14:51:06.369080 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-08-29 14:51:06.369091 | orchestrator | Friday 29 August 2025 14:47:13 +0000 (0:00:01.070) 0:00:05.135 ********* 2025-08-29 14:51:06.369102 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:51:06.369112 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:51:06.369123 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:51:06.369133 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:06.369144 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:06.369154 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:06.369164 | orchestrator | 2025-08-29 14:51:06.369175 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-08-29 14:51:06.369185 | orchestrator | Friday 29 August 2025 14:47:14 +0000 (0:00:01.177) 0:00:06.312 ********* 2025-08-29 14:51:06.369196 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:06.369212 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:06.369223 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:06.369233 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:06.369244 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:06.369254 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:06.369265 | orchestrator | 2025-08-29 14:51:06.369275 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-08-29 14:51:06.369286 | orchestrator | Friday 29 August 2025 14:47:15 +0000 (0:00:00.556) 0:00:06.869 ********* 2025-08-29 14:51:06.369322 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:06.369341 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:06.369367 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:06.369508 | orchestrator 
| skipping: [testbed-node-0] 2025-08-29 14:51:06.369525 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:06.369536 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:06.369547 | orchestrator | 2025-08-29 14:51:06.369562 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-08-29 14:51:06.369573 | orchestrator | Friday 29 August 2025 14:47:15 +0000 (0:00:00.771) 0:00:07.640 ********* 2025-08-29 14:51:06.369584 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:51:06.369595 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:51:06.369606 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:06.369616 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:51:06.369627 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:51:06.369637 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:06.369648 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:51:06.369659 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:51:06.369669 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:06.369680 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:51:06.369690 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:51:06.369701 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:06.369712 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:51:06.369722 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:51:06.369733 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
14:51:06.369743 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:51:06.369754 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:51:06.369765 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:06.369776 | orchestrator | 2025-08-29 14:51:06.369796 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-08-29 14:51:06.369807 | orchestrator | Friday 29 August 2025 14:47:16 +0000 (0:00:00.630) 0:00:08.271 ********* 2025-08-29 14:51:06.369818 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:06.369829 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:06.369839 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:06.369850 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:06.369861 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:06.369871 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:06.369882 | orchestrator | 2025-08-29 14:51:06.369893 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-08-29 14:51:06.369912 | orchestrator | Friday 29 August 2025 14:47:18 +0000 (0:00:01.670) 0:00:09.941 ********* 2025-08-29 14:51:06.369933 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:51:06.369953 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:51:06.369972 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:51:06.370002 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:06.370081 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:06.370096 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:06.370106 | orchestrator | 2025-08-29 14:51:06.370117 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-08-29 14:51:06.370128 | orchestrator | Friday 29 August 2025 14:47:19 +0000 (0:00:00.884) 0:00:10.826 ********* 
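The k3s_download role carries one download task per CPU architecture (x64, arm64, armhf); on a given host exactly one variant runs and the rest are skipped. The selection amounts to an architecture-to-artifact mapping, sketched here as an assumption from the task titles (the function name is hypothetical; the binary names follow the k3s release naming convention):

```python
# Map a machine architecture (as reported by `uname -m`) to the matching
# k3s release binary; only one variant applies per host, mirroring how
# only one of the three download tasks runs and the others are skipped.
K3S_BINARIES = {
    "x86_64": "k3s",        # "Download k3s binary x64"
    "aarch64": "k3s-arm64", # "Download k3s binary arm64"
    "armv7l": "k3s-armhf",  # "Download k3s binary armhf"
}

def k3s_binary_for(machine_arch):
    """Return the k3s binary name for an architecture, or raise."""
    try:
        return K3S_BINARIES[machine_arch]
    except KeyError:
        raise ValueError(f"unsupported architecture: {machine_arch}")
```

In this run all six testbed nodes are x86_64, which is why only the x64 task reports `changed`.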
2025-08-29 14:51:06.370141 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:06.370154 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:06.370166 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:51:06.370178 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:06.370190 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:51:06.370202 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:51:06.370214 | orchestrator | 2025-08-29 14:51:06.370226 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-08-29 14:51:06.370238 | orchestrator | Friday 29 August 2025 14:47:24 +0000 (0:00:05.653) 0:00:16.479 ********* 2025-08-29 14:51:06.370251 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:06.370262 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:06.370274 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:06.370286 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:06.370385 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:06.370398 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:06.370410 | orchestrator | 2025-08-29 14:51:06.370422 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-08-29 14:51:06.370435 | orchestrator | Friday 29 August 2025 14:47:25 +0000 (0:00:01.308) 0:00:17.788 ********* 2025-08-29 14:51:06.370447 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:06.370459 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:06.370470 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:06.370480 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:06.370491 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:06.370501 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:06.370512 | orchestrator | 2025-08-29 14:51:06.370523 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 
'main' - Configure the use of a custom container registry] *** 2025-08-29 14:51:06.370535 | orchestrator | Friday 29 August 2025 14:47:29 +0000 (0:00:03.225) 0:00:21.014 ********* 2025-08-29 14:51:06.370545 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:51:06.370556 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:51:06.370566 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:51:06.370577 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:06.370587 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:06.370598 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:06.370608 | orchestrator | 2025-08-29 14:51:06.370619 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-08-29 14:51:06.370630 | orchestrator | Friday 29 August 2025 14:47:30 +0000 (0:00:01.428) 0:00:22.442 ********* 2025-08-29 14:51:06.370647 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-08-29 14:51:06.370658 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-08-29 14:51:06.370669 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-08-29 14:51:06.370679 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-08-29 14:51:06.370689 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-08-29 14:51:06.370700 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-08-29 14:51:06.370710 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-08-29 14:51:06.370721 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-08-29 14:51:06.370731 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-08-29 14:51:06.370742 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-08-29 14:51:06.370752 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-08-29 14:51:06.370763 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-08-29 14:51:06.370781 | orchestrator | 2025-08-29 
14:51:06.370792 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-08-29 14:51:06.370803 | orchestrator | Friday 29 August 2025 14:47:33 +0000 (0:00:03.356) 0:00:25.799 ********* 2025-08-29 14:51:06.370814 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:51:06.370824 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:51:06.370835 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:51:06.370845 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:06.370856 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:06.370866 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:06.370877 | orchestrator | 2025-08-29 14:51:06.370887 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-08-29 14:51:06.370898 | orchestrator | 2025-08-29 14:51:06.370907 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-08-29 14:51:06.370917 | orchestrator | Friday 29 August 2025 14:47:36 +0000 (0:00:02.175) 0:00:27.975 ********* 2025-08-29 14:51:06.370926 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:06.370936 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:06.370945 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:06.370954 | orchestrator | 2025-08-29 14:51:06.370964 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-08-29 14:51:06.370973 | orchestrator | Friday 29 August 2025 14:47:38 +0000 (0:00:02.422) 0:00:30.397 ********* 2025-08-29 14:51:06.370991 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:06.371001 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:06.371010 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:06.371020 | orchestrator | 2025-08-29 14:51:06.371029 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-08-29 14:51:06.371039 | 
orchestrator | Friday 29 August 2025 14:47:39 +0000 (0:00:01.266) 0:00:31.664 ********* 2025-08-29 14:51:06.371048 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:06.371057 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:06.371067 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:06.371076 | orchestrator | 2025-08-29 14:51:06.371086 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-08-29 14:51:06.371095 | orchestrator | Friday 29 August 2025 14:47:40 +0000 (0:00:00.992) 0:00:32.656 ********* 2025-08-29 14:51:06.371105 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:06.371114 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:06.371123 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:06.371133 | orchestrator | 2025-08-29 14:51:06.371142 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-08-29 14:51:06.371151 | orchestrator | Friday 29 August 2025 14:47:42 +0000 (0:00:01.817) 0:00:34.474 ********* 2025-08-29 14:51:06.371161 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:06.371170 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:06.371180 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:06.371189 | orchestrator | 2025-08-29 14:51:06.371199 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-08-29 14:51:06.371208 | orchestrator | Friday 29 August 2025 14:47:43 +0000 (0:00:00.508) 0:00:34.983 ********* 2025-08-29 14:51:06.371218 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:06.371227 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:06.371236 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:06.371245 | orchestrator | 2025-08-29 14:51:06.371255 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-08-29 14:51:06.371264 | orchestrator | Friday 29 August 2025 14:47:44 +0000 
(0:00:01.068) 0:00:36.051 ********* 2025-08-29 14:51:06.371273 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:06.371283 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:06.371310 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:06.371327 | orchestrator | 2025-08-29 14:51:06.371337 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-08-29 14:51:06.371347 | orchestrator | Friday 29 August 2025 14:47:45 +0000 (0:00:01.648) 0:00:37.700 ********* 2025-08-29 14:51:06.371363 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:51:06.371373 | orchestrator | 2025-08-29 14:51:06.371382 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-08-29 14:51:06.371391 | orchestrator | Friday 29 August 2025 14:47:47 +0000 (0:00:01.130) 0:00:38.831 ********* 2025-08-29 14:51:06.371401 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:06.371410 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:06.371420 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:06.371429 | orchestrator | 2025-08-29 14:51:06.371439 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-08-29 14:51:06.371448 | orchestrator | Friday 29 August 2025 14:47:49 +0000 (0:00:02.175) 0:00:41.006 ********* 2025-08-29 14:51:06.371457 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:06.371467 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:06.371476 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:06.371485 | orchestrator | 2025-08-29 14:51:06.371495 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-08-29 14:51:06.371504 | orchestrator | Friday 29 August 2025 14:47:49 +0000 (0:00:00.506) 0:00:41.513 ********* 2025-08-29 14:51:06.371514 | orchestrator | skipping: 
[testbed-node-1]
changed: [testbed-node-0]
skipping: [testbed-node-2]

TASK [k3s_server : Copy vip manifest to first master] **************************
Friday 29 August 2025 14:47:51 +0000 (0:00:01.984) 0:00:43.497 *********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Deploy metallb manifest] ************************************
Friday 29 August 2025 14:47:53 +0000 (0:00:01.548) 0:00:45.045 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Deploy kube-vip manifest] ***********************************
Friday 29 August 2025 14:47:53 +0000 (0:00:00.575) 0:00:45.621 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
Friday 29 August 2025 14:47:54 +0000 (0:00:00.437) 0:00:46.059 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
Friday 29 August 2025 14:47:56 +0000 (0:00:02.568) 0:00:48.628 *********
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
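The join check above is the standard Ansible retries/until pattern: the task re-polls until every control-plane node has joined, counting down from 20 attempts before giving up and pointing at k3s-init.service. A minimal sketch of that loop, where `check_nodes_ready` is a hypothetical stand-in for the real probe (something like `k3s kubectl get nodes` counting Ready nodes) and is wired to succeed on the third poll purely for illustration:

```shell
#!/bin/sh
# Sketch of the retries/until pattern behind "Verify that all nodes
# actually joined". check_nodes_ready is a stand-in for the real
# kubectl-based probe, not part of the testbed roles.
attempt=0
check_nodes_ready() {
  attempt=$((attempt + 1))
  # Pretend the third poll is the first one where all nodes are Ready.
  [ "$attempt" -ge 3 ]
}

retries=20
until check_nodes_ready; do
  retries=$((retries - 1))
  if [ "$retries" -le 0 ]; then
    echo "nodes never joined; inspect: journalctl -u k3s-init" >&2
    exit 1
  fi
  echo "FAILED - RETRYING ($retries retries left)"
  sleep 1
done
echo "all nodes joined after $attempt polls"
```

In the real run the loop burned four rounds of retries (20 down to 16) over roughly a minute before all three nodes reported in.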
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [k3s_server : Save logs of k3s-init.service] ******************************
Friday 29 August 2025 14:48:52 +0000 (0:00:55.355) 0:01:43.983 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Kill the temporary service used for initialization] *********
Friday 29 August 2025 14:48:52 +0000 (0:00:00.324) 0:01:44.307 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy K3s service file] **************************************
Friday 29 August 2025 14:48:53 +0000 (0:00:01.092) 0:01:45.400 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Enable and check K3s service] *******************************
Friday 29 August 2025 14:48:54 +0000 (0:00:01.315) 0:01:46.715 *********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [k3s_server : Wait for node-token] ****************************************
Friday 29 August 2025 14:49:20 +0000 (0:00:25.652) 0:02:12.368 *********
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-0]

TASK [k3s_server : Register node-token file access mode] ***********************
Friday 29 August 2025 14:49:21 +0000 (0:00:00.712) 0:02:13.080 *********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Change file access node-token] ******************************
Friday 29 August 2025 14:49:21 +0000 (0:00:00.660) 0:02:13.740 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Read node-token from master] ********************************
Friday 29 August 2025 14:49:22 +0000 (0:00:00.680) 0:02:14.420 *********
ok: [testbed-node-1]
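The node-token tasks around this point (register the file's access mode, loosen it, read it, restore it) exist so the cluster join token never ends up permanently world-readable. A self-contained sketch of the same dance against a scratch file; the `node-token` path mirrors k3s's `/var/lib/rancher/k3s/server/node-token` and the 0600/0644 modes are illustrative, while the token value itself is fabricated:

```shell
# Simulate the node-token mode dance in a scratch dir (real path:
# /var/lib/rancher/k3s/server/node-token). Token value is fake.
workdir=$(mktemp -d)
token_file="$workdir/node-token"
printf 'K10deadbeef::server:secret\n' > "$token_file"
chmod 0600 "$token_file"                  # k3s keeps the token private

orig_mode=$(stat -c '%a' "$token_file")   # "Register node-token file access mode"
chmod 0644 "$token_file"                  # "Change file access node-token"
token=$(cat "$token_file")                # "Read node-token from master"
chmod "$orig_mode" "$token_file"          # "Restore node-token file access"

echo "token=$token mode=$(stat -c '%a' "$token_file")"
```

The token read here is what the agent nodes later use to join the first server.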
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Store Master node-token] ************************************
Friday 29 August 2025 14:49:23 +0000 (0:00:00.927) 0:02:15.348 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Restore node-token file access] *****************************
Friday 29 August 2025 14:49:23 +0000 (0:00:00.341) 0:02:15.690 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create directory .kube] *************************************
Friday 29 August 2025 14:49:24 +0000 (0:00:00.713) 0:02:16.403 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy config file to user home directory] ********************
Friday 29 August 2025 14:49:25 +0000 (0:00:00.796) 0:02:17.200 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
Friday 29 August 2025 14:49:26 +0000 (0:00:01.237) 0:02:18.437 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create kubectl symlink] *************************************
Friday 29 August 2025 14:49:27 +0000 (0:00:00.920) 0:02:19.357 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create crictl symlink] **************************************
Friday 29 August 2025 14:49:27 +0000 (0:00:00.290) 0:02:19.648 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Get contents of manifests folder] ***************************
Friday 29 August 2025 14:49:28 +0000 (0:00:00.301) 0:02:19.950 *********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Get sub dirs of manifests folder] ***************************
Friday 29 August 2025 14:49:29 +0000 (0:00:00.882) 0:02:20.832 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
Friday 29 August 2025 14:49:29 +0000 (0:00:00.675) 0:02:21.507 *********
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Friday 29 August 2025 14:49:33 +0000 (0:00:03.576) 0:02:25.084 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Friday 29 August 2025 14:49:33 +0000 (0:00:00.623) 0:02:25.707 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Friday 29 August 2025 14:49:34 +0000 (0:00:00.707) 0:02:26.415 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Friday 29 August 2025 14:49:35 +0000 (0:00:00.420) 0:02:26.836 *********
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Friday 29 August 2025 14:49:35 +0000 (0:00:00.762) 0:02:27.599 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Friday 29 August 2025 14:49:36 +0000 (0:00:00.342) 0:02:27.941 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Friday 29 August 2025 14:49:36 +0000 (0:00:00.363) 0:02:28.304 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
Friday 29 August 2025 14:49:36 +0000 (0:00:00.339) 0:02:28.644 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
Friday 29 August 2025 14:49:37 +0000 (0:00:00.758) 0:02:29.402 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Configure the k3s service] ***********************************
Friday 29 August 2025 14:49:39 +0000 (0:00:01.474) 0:02:30.877 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Manage k3s service] ******************************************
Friday 29 August 2025 14:49:40 +0000 (0:00:01.421) 0:02:32.299 *********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Friday 29 August 2025 14:49:53 +0000 (0:00:12.739) 0:02:45.039 *********
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Friday 29 August 2025 14:49:54 +0000 (0:00:00.850) 0:02:45.890 *********
changed: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Friday 29 August 2025 14:49:54 +0000 (0:00:00.492) 0:02:46.383 *********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Friday 29 August 2025 14:49:55 +0000 (0:00:00.602) 0:02:46.985 *********
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Friday 29 August 2025 14:49:56 +0000 (0:00:01.124) 0:02:48.109 *********
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Friday 29 August 2025 14:49:57 +0000 (0:00:00.722) 0:02:48.832 *********
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Friday 29 August 2025 14:49:58 +0000 (0:00:01.780) 0:02:50.613 *********
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Friday 29 August 2025 14:49:59 +0000 (0:00:01.038) 0:02:51.652 *********
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Friday 29 August 2025 14:50:00 +0000 (0:00:00.507) 0:02:52.159 *********
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Friday 29 August 2025 14:50:01 +0000 (0:00:00.777) 0:02:52.937 *********
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Friday 29 August 2025 14:50:01 +0000 (0:00:00.162) 0:02:53.100 *********
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Friday 29 August 2025 14:50:01 +0000 (0:00:00.286) 0:02:53.386 *********
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
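The kubectl role's Debian-family tasks follow the usual apt repository flow: install apt-transport-https, drop a dearmored GPG key into a keyring, fix its permissions, write a signed-by deb line, then install the package. A hedged offline sketch of the files such a role lays down, written into a scratch directory rather than /etc; the pkgs.k8s.io URL and the v1.31 channel are illustrative assumptions, not values read from this job:

```shell
# Offline sketch of the Debian-family repo layout (scratch dir, not /etc).
# Repo URL and channel are assumptions; the role's real values aren't in this log.
aptdir=$(mktemp -d)
keyring="$aptdir/kubernetes-apt-keyring.gpg"
list="$aptdir/kubernetes.list"

# Real role: curl <repo>/Release.key | gpg --dearmor -o "$keyring"
: > "$keyring"
chmod 0644 "$keyring"   # matches the "Set permissions of gpg key" task

printf 'deb [signed-by=%s] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /\n' \
  "$keyring" > "$list"
cat "$list"
```

With the key and list file in place under /etc, `apt-get update && apt-get install -y kubectl` corresponds to the "Install required packages" task that follows.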
Friday 29 August 2025 14:50:02 +0000 (0:00:00.995) 0:02:54.381 *********
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Friday 29 August 2025 14:50:04 +0000 (0:00:02.293) 0:02:56.675 *********
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Friday 29 August 2025 14:50:05 +0000 (0:00:00.856) 0:02:57.531 *********
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Friday 29 August 2025 14:50:06 +0000 (0:00:00.480) 0:02:58.012 *********
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Friday 29 August 2025 14:50:15 +0000 (0:00:09.229) 0:03:07.241 *********
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Friday 29 August 2025 14:50:31 +0000 (0:00:16.535) 0:03:23.776 *********
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Friday 29 August 2025 14:50:32 +0000 (0:00:00.675) 0:03:24.452 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Friday 29 August 2025 14:50:33 +0000 (0:00:00.383) 0:03:24.835 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Friday 29 August 2025 14:50:33 +0000 (0:00:00.344) 0:03:25.179 *********
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Friday 29 August 2025 14:50:34 +0000 (0:00:00.942) 0:03:26.122 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
Friday 29 August 2025 14:50:34 +0000 (0:00:00.219) 0:03:26.342 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
Friday 29 August 2025 14:50:34 +0000 (0:00:00.216) 0:03:26.559 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
Friday 29 August 2025 14:50:34 +0000 (0:00:00.223) 0:03:26.783 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
Friday 29 August 2025 14:50:35 +0000 (0:00:00.216) 0:03:27.000 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log installed Cilium CLI version] **********************
Friday 29 August 2025 14:50:35 +0000 (0:00:00.250) 0:03:27.251 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
Friday 29 August 2025 14:50:35 +0000 (0:00:00.204) 0:03:27.455 *********
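The Cilium CLI tasks in this play (all skipped on this run) spell out a download-verify-extract sequence: fetch the release tarball plus its .tar.gz.sha256sum, check the checksum, unpack into /usr/local/bin, then remove the downloaded files. A self-contained sketch of the verify-then-extract core, using a locally fabricated archive so nothing is fetched; the file names mimic cilium-cli release artifacts and the extraction prefix is a scratch stand-in for /usr/local/bin:

```shell
# Offline sketch of "Download Cilium CLI and checksum" ->
# "Verify the downloaded tarball" -> "Extract Cilium CLI to /usr/local/bin"
# -> "Remove downloaded tarball and checksum file".
# The tarball is fabricated locally; real runs fetch it from the
# cilium-cli releases.
work=$(mktemp -d) && cd "$work"
mkdir -p bin && printf '#!/bin/sh\necho cilium stub\n' > bin/cilium
tar czf cilium-linux-amd64.tar.gz -C bin cilium
sha256sum cilium-linux-amd64.tar.gz > cilium-linux-amd64.tar.gz.sha256sum

sha256sum --check cilium-linux-amd64.tar.gz.sha256sum   # refuse a corrupt download
mkdir -p prefix                                          # stand-in for /usr/local/bin
tar xzf cilium-linux-amd64.tar.gz -C prefix
rm cilium-linux-amd64.tar.gz cilium-linux-amd64.tar.gz.sha256sum  # cleanup step
ls prefix
```

Verifying the checksum before extracting is the point of the middle task: a truncated or tampered tarball fails `sha256sum --check` and the play stops before anything lands in /usr/local/bin.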
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
Friday 29 August 2025 14:50:35 +0000 (0:00:00.196) 0:03:27.651 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Set architecture variable] *****************************
Friday 29 August 2025 14:50:36 +0000 (0:00:00.222) 0:03:27.873 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
Friday 29 August 2025 14:50:36 +0000 (0:00:00.196) 0:03:28.070 *********
skipping: [testbed-node-0] => (item=.tar.gz)
skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
skipping: [testbed-node-0]

TASK [k3s_server_post : Verify the downloaded tarball] *************************
Friday 29 August 2025 14:50:37 +0000 (0:00:00.795) 0:03:28.865 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
Friday 29 August 2025 14:50:37 +0000 (0:00:00.221) 0:03:29.087 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
Friday 29 August 2025 14:50:37 +0000 (0:00:00.198) 0:03:29.285 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Friday 29 August 2025 14:50:37 +0000 (0:00:00.201) 0:03:29.486 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Friday 29 August 2025 14:50:37 +0000 (0:00:00.197) 0:03:29.684 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Friday 29 August 2025 14:50:38 +0000 (0:00:00.188) 0:03:29.872 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Check Cilium version] **********************************
Friday 29 August 2025 14:50:38 +0000 (0:00:00.187) 0:03:30.060 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Friday 29 August 2025 14:50:38 +0000 (0:00:00.205) 0:03:30.266 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Friday 29 August 2025 14:50:38 +0000 (0:00:00.180) 0:03:30.447 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Friday 29 August 2025 14:50:38 +0000 (0:00:00.205) 0:03:30.653 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Friday 29 August 2025 14:50:39 +0000 (0:00:00.223) 0:03:30.876 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Friday 29 August 2025 14:50:39 +0000 (0:00:00.207) 0:03:31.084 *********
skipping: [testbed-node-0] => (item=deployment/cilium-operator)
skipping: [testbed-node-0] => (item=daemonset/cilium)
skipping: [testbed-node-0] => (item=deployment/hubble-relay)
skipping: [testbed-node-0] => (item=deployment/hubble-ui)
skipping: [testbed-node-0]

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Friday 29 August 2025 14:50:40 +0000 (0:00:00.958) 0:03:32.042 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Friday 29 August 2025 14:50:40 +0000 (0:00:00.248) 0:03:32.291 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Friday 29 August 2025 14:50:40 +0000 (0:00:00.217) 0:03:32.508 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Friday 29 August 2025 14:50:40 +0000 (0:00:00.224) 0:03:32.733 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Friday 29 August 2025 14:50:41 +0000 (0:00:00.199) 0:03:32.932 *********
skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
skipping: [testbed-node-0] => (item=kubectl get
CiliumLoadBalancerIPPool.cilium.io)  2025-08-29 14:51:06.375781 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:06.375788 | orchestrator | 2025-08-29 14:51:06.375794 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-08-29 14:51:06.375801 | orchestrator | Friday 29 August 2025 14:50:41 +0000 (0:00:00.289) 0:03:33.221 ********* 2025-08-29 14:51:06.375807 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:06.375814 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:06.375821 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:06.375827 | orchestrator | 2025-08-29 14:51:06.375834 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-08-29 14:51:06.375840 | orchestrator | Friday 29 August 2025 14:50:41 +0000 (0:00:00.307) 0:03:33.529 ********* 2025-08-29 14:51:06.375847 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:06.375854 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:06.375861 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:06.375867 | orchestrator | 2025-08-29 14:51:06.375880 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-08-29 14:51:06.375887 | orchestrator | 2025-08-29 14:51:06.375894 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-08-29 14:51:06.375901 | orchestrator | Friday 29 August 2025 14:50:42 +0000 (0:00:00.935) 0:03:34.465 ********* 2025-08-29 14:51:06.375907 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:06.375914 | orchestrator | 2025-08-29 14:51:06.375920 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-08-29 14:51:06.375927 | orchestrator | Friday 29 August 2025 14:50:42 +0000 (0:00:00.100) 0:03:34.565 ********* 2025-08-29 14:51:06.375933 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml 
for testbed-manager 2025-08-29 14:51:06.375940 | orchestrator | 2025-08-29 14:51:06.375946 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-08-29 14:51:06.375953 | orchestrator | Friday 29 August 2025 14:50:42 +0000 (0:00:00.227) 0:03:34.792 ********* 2025-08-29 14:51:06.375960 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:06.375966 | orchestrator | 2025-08-29 14:51:06.375973 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-08-29 14:51:06.375979 | orchestrator | 2025-08-29 14:51:06.375986 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-08-29 14:51:06.375993 | orchestrator | Friday 29 August 2025 14:50:49 +0000 (0:00:06.203) 0:03:40.996 ********* 2025-08-29 14:51:06.376004 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:51:06.376011 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:51:06.376017 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:51:06.376024 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:06.376030 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:06.376037 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:06.376043 | orchestrator | 2025-08-29 14:51:06.376050 | orchestrator | TASK [Manage labels] *********************************************************** 2025-08-29 14:51:06.376057 | orchestrator | Friday 29 August 2025 14:50:49 +0000 (0:00:00.792) 0:03:41.788 ********* 2025-08-29 14:51:06.376063 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-08-29 14:51:06.376074 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-08-29 14:51:06.376080 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-08-29 14:51:06.376087 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=node-role.osism.tech/control-plane=true) 2025-08-29 14:51:06.376094 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-08-29 14:51:06.376100 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-08-29 14:51:06.376107 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-08-29 14:51:06.376113 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-08-29 14:51:06.376120 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-08-29 14:51:06.376126 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-08-29 14:51:06.376133 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-08-29 14:51:06.376140 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-08-29 14:51:06.376146 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-08-29 14:51:06.376153 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-08-29 14:51:06.376159 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-08-29 14:51:06.376166 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-08-29 14:51:06.376173 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-08-29 14:51:06.376179 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-08-29 14:51:06.376186 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-08-29 14:51:06.376192 | orchestrator | ok: [testbed-node-1 -> localhost] => 
(item=node-role.osism.tech/rook-mds=true) 2025-08-29 14:51:06.376199 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-08-29 14:51:06.376206 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 14:51:06.376212 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 14:51:06.376219 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 14:51:06.376225 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 14:51:06.376232 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 14:51:06.376239 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 14:51:06.376245 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 14:51:06.376255 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 14:51:06.376265 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 14:51:06.376272 | orchestrator | 2025-08-29 14:51:06.376278 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-08-29 14:51:06.376285 | orchestrator | Friday 29 August 2025 14:51:02 +0000 (0:00:12.560) 0:03:54.348 ********* 2025-08-29 14:51:06.376306 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:06.376313 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:06.376319 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:06.376326 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:06.376333 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:06.376339 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
14:51:06.376346 | orchestrator | 2025-08-29 14:51:06.376352 | orchestrator | TASK [Manage taints] *********************************************************** 2025-08-29 14:51:06.376359 | orchestrator | Friday 29 August 2025 14:51:03 +0000 (0:00:00.673) 0:03:55.022 ********* 2025-08-29 14:51:06.376365 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:06.376372 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:06.376386 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:51:06.376393 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:51:06.376399 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:51:06.376412 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:51:06.376419 | orchestrator | 2025-08-29 14:51:06.376426 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:51:06.376432 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:51:06.376439 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-08-29 14:51:06.376446 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 14:51:06.376457 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 14:51:06.376464 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 14:51:06.376471 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 14:51:06.376477 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 14:51:06.376484 | orchestrator | 2025-08-29 14:51:06.376490 | orchestrator | 2025-08-29 14:51:06.376497 | orchestrator | TASKS RECAP 
******************************************************************** 2025-08-29 14:51:06.376504 | orchestrator | Friday 29 August 2025 14:51:03 +0000 (0:00:00.514) 0:03:55.536 ********* 2025-08-29 14:51:06.376510 | orchestrator | =============================================================================== 2025-08-29 14:51:06.376517 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.36s 2025-08-29 14:51:06.376524 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.65s 2025-08-29 14:51:06.376530 | orchestrator | kubectl : Install required packages ------------------------------------ 16.54s 2025-08-29 14:51:06.376537 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.74s 2025-08-29 14:51:06.376543 | orchestrator | Manage labels ---------------------------------------------------------- 12.56s 2025-08-29 14:51:06.376550 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 9.23s 2025-08-29 14:51:06.376556 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.20s 2025-08-29 14:51:06.376567 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.65s 2025-08-29 14:51:06.376574 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.58s 2025-08-29 14:51:06.376580 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 3.36s 2025-08-29 14:51:06.376587 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.23s 2025-08-29 14:51:06.376593 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.57s 2025-08-29 14:51:06.376600 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 
2.42s 2025-08-29 14:51:06.376606 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.29s 2025-08-29 14:51:06.376613 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.18s 2025-08-29 14:51:06.376619 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.18s 2025-08-29 14:51:06.376626 | orchestrator | k3s_server : Download vip rbac manifest to first master ----------------- 1.98s 2025-08-29 14:51:06.376632 | orchestrator | k3s_server : Clean previous runs of k3s-init ---------------------------- 1.82s 2025-08-29 14:51:06.376639 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.79s 2025-08-29 14:51:06.376645 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.78s 2025-08-29 14:51:06.376658 | orchestrator | 2025-08-29 14:51:06 | INFO  | Task 1295dc13-60dc-42ef-afe3-50ba1f9e4785 is in state STARTED 2025-08-29 14:51:06.376665 | orchestrator | 2025-08-29 14:51:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:09.408161 | orchestrator | 2025-08-29 14:51:09 | INFO  | Task f5f127fc-0609-4463-b3f7-335c65b22f0a is in state STARTED 2025-08-29 14:51:09.409076 | orchestrator | 2025-08-29 14:51:09 | INFO  | Task e8fdb1bc-a4b1-4826-9a77-7dfd25acb899 is in state STARTED 2025-08-29 14:51:09.411562 | orchestrator | 2025-08-29 14:51:09 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:51:09.412965 | orchestrator | 2025-08-29 14:51:09 | INFO  | Task 8c7aff9e-7c60-4a7c-ae19-738e986c164a is in state STARTED 2025-08-29 14:51:09.414947 | orchestrator | 2025-08-29 14:51:09 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:51:09.415450 | orchestrator | 2025-08-29 14:51:09 | INFO  | Task 4dbdbbf1-fdaf-4cb9-a8d3-e8093fcc4753 is in state STARTED 2025-08-29 14:51:09.416611 | orchestrator | 
2025-08-29 14:51:09 | INFO  | Task 1295dc13-60dc-42ef-afe3-50ba1f9e4785 is in state STARTED 2025-08-29 14:51:09.416847 | orchestrator | 2025-08-29 14:51:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:12.461133 | orchestrator | 2025-08-29 14:51:12 | INFO  | Task f5f127fc-0609-4463-b3f7-335c65b22f0a is in state STARTED 2025-08-29 14:51:12.461332 | orchestrator | 2025-08-29 14:51:12 | INFO  | Task e8fdb1bc-a4b1-4826-9a77-7dfd25acb899 is in state STARTED 2025-08-29 14:51:12.461747 | orchestrator | 2025-08-29 14:51:12 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:51:12.462424 | orchestrator | 2025-08-29 14:51:12 | INFO  | Task 8c7aff9e-7c60-4a7c-ae19-738e986c164a is in state STARTED 2025-08-29 14:51:12.463544 | orchestrator | 2025-08-29 14:51:12 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:51:12.466596 | orchestrator | 2025-08-29 14:51:12 | INFO  | Task 4dbdbbf1-fdaf-4cb9-a8d3-e8093fcc4753 is in state STARTED 2025-08-29 14:51:12.467045 | orchestrator | 2025-08-29 14:51:12 | INFO  | Task 1295dc13-60dc-42ef-afe3-50ba1f9e4785 is in state SUCCESS 2025-08-29 14:51:12.467065 | orchestrator | 2025-08-29 14:51:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:15.506308 | orchestrator | 2025-08-29 14:51:15 | INFO  | Task f5f127fc-0609-4463-b3f7-335c65b22f0a is in state STARTED 2025-08-29 14:51:15.506795 | orchestrator | 2025-08-29 14:51:15 | INFO  | Task e8fdb1bc-a4b1-4826-9a77-7dfd25acb899 is in state STARTED 2025-08-29 14:51:15.507526 | orchestrator | 2025-08-29 14:51:15 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:51:15.508475 | orchestrator | 2025-08-29 14:51:15 | INFO  | Task 8c7aff9e-7c60-4a7c-ae19-738e986c164a is in state SUCCESS 2025-08-29 14:51:15.509402 | orchestrator | 2025-08-29 14:51:15.509500 | orchestrator | 2025-08-29 14:51:15.509511 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] 
************************* 2025-08-29 14:51:15.509520 | orchestrator | 2025-08-29 14:51:15.509527 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-08-29 14:51:15.509535 | orchestrator | Friday 29 August 2025 14:51:09 +0000 (0:00:00.214) 0:00:00.214 ********* 2025-08-29 14:51:15.509542 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-08-29 14:51:15.509549 | orchestrator | 2025-08-29 14:51:15.509556 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-08-29 14:51:15.509563 | orchestrator | Friday 29 August 2025 14:51:10 +0000 (0:00:00.793) 0:00:01.007 ********* 2025-08-29 14:51:15.509569 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:15.509575 | orchestrator | 2025-08-29 14:51:15.509582 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-08-29 14:51:15.509588 | orchestrator | Friday 29 August 2025 14:51:11 +0000 (0:00:01.163) 0:00:02.171 ********* 2025-08-29 14:51:15.509594 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:15.509600 | orchestrator | 2025-08-29 14:51:15.509605 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:51:15.509612 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:51:15.509619 | orchestrator | 2025-08-29 14:51:15.509625 | orchestrator | 2025-08-29 14:51:15.509631 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:51:15.509638 | orchestrator | Friday 29 August 2025 14:51:11 +0000 (0:00:00.415) 0:00:02.587 ********* 2025-08-29 14:51:15.509644 | orchestrator | =============================================================================== 2025-08-29 14:51:15.509650 | orchestrator | Write kubeconfig file --------------------------------------------------- 
1.16s 2025-08-29 14:51:15.509657 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.79s 2025-08-29 14:51:15.509663 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.42s 2025-08-29 14:51:15.509670 | orchestrator | 2025-08-29 14:51:15.509676 | orchestrator | 2025-08-29 14:51:15.509682 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:51:15.509689 | orchestrator | 2025-08-29 14:51:15.509705 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 14:51:15.509712 | orchestrator | Friday 29 August 2025 14:50:40 +0000 (0:00:00.470) 0:00:00.470 ********* 2025-08-29 14:51:15.509718 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:15.509725 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:15.509731 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:15.509738 | orchestrator | 2025-08-29 14:51:15.509745 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 14:51:15.509751 | orchestrator | Friday 29 August 2025 14:50:40 +0000 (0:00:00.536) 0:00:01.007 ********* 2025-08-29 14:51:15.509758 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-08-29 14:51:15.509765 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-08-29 14:51:15.509772 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-08-29 14:51:15.509779 | orchestrator | 2025-08-29 14:51:15.509785 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-08-29 14:51:15.509809 | orchestrator | 2025-08-29 14:51:15.509817 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-08-29 14:51:15.509824 | orchestrator | Friday 29 August 2025 14:50:41 +0000 (0:00:00.569) 0:00:01.577 ********* 2025-08-29 14:51:15.509830 | 
orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:51:15.509838 | orchestrator | 2025-08-29 14:51:15.509844 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-08-29 14:51:15.509851 | orchestrator | Friday 29 August 2025 14:50:42 +0000 (0:00:00.580) 0:00:02.157 ********* 2025-08-29 14:51:15.509859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.509870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.509888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.509896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.509904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.509915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.509927 | orchestrator | 2025-08-29 14:51:15.509934 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-08-29 14:51:15.509940 | orchestrator | Friday 29 August 2025 14:50:43 +0000 (0:00:01.235) 0:00:03.393 ********* 2025-08-29 14:51:15.509945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.509951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.509957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.509969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.509975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.509984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 
'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.509994 | orchestrator | 2025-08-29 14:51:15.510000 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-08-29 14:51:15.510006 | orchestrator | Friday 29 August 2025 14:50:45 +0000 (0:00:02.661) 0:00:06.055 ********* 2025-08-29 14:51:15.510012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.510070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.510076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.510089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.510095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.510104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.510115 | orchestrator | 2025-08-29 14:51:15.510122 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-08-29 14:51:15.510128 | orchestrator | Friday 29 August 2025 14:50:48 +0000 (0:00:03.025) 0:00:09.080 ********* 2025-08-29 14:51:15.510135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.510142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.510149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.510161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.510168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.510178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 14:51:15.510188 | orchestrator | 2025-08-29 14:51:15.510195 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-08-29 14:51:15.510202 | orchestrator | Friday 29 August 2025 14:50:50 +0000 (0:00:01.899) 0:00:10.980 ********* 2025-08-29 14:51:15.510208 | orchestrator | 2025-08-29 14:51:15.510215 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-08-29 14:51:15.510222 | orchestrator | Friday 29 August 2025 14:50:50 +0000 (0:00:00.069) 0:00:11.050 ********* 2025-08-29 14:51:15.510228 | orchestrator | 2025-08-29 14:51:15.510234 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-08-29 14:51:15.510240 | orchestrator | Friday 29 August 2025 14:50:51 +0000 (0:00:00.078) 0:00:11.128 ********* 2025-08-29 14:51:15.510246 | orchestrator | 2025-08-29 14:51:15.510252 | 
orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-08-29 14:51:15.510258 | orchestrator | Friday 29 August 2025 14:50:51 +0000 (0:00:00.071) 0:00:11.200 ********* 2025-08-29 14:51:15.510264 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:15.510271 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:15.510277 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:15.510284 | orchestrator | 2025-08-29 14:51:15.510324 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-08-29 14:51:15.510330 | orchestrator | Friday 29 August 2025 14:51:02 +0000 (0:00:11.820) 0:00:23.021 ********* 2025-08-29 14:51:15.510337 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:15.510343 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:15.510350 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:15.510356 | orchestrator | 2025-08-29 14:51:15.510363 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:51:15.510370 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:51:15.510376 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:51:15.510383 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:51:15.510391 | orchestrator | 2025-08-29 14:51:15.510399 | orchestrator | 2025-08-29 14:51:15.510407 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:51:15.510415 | orchestrator | Friday 29 August 2025 14:51:13 +0000 (0:00:10.537) 0:00:33.558 ********* 2025-08-29 14:51:15.510423 | orchestrator | =============================================================================== 2025-08-29 14:51:15.510430 | orchestrator | redis : Restart 
redis container ---------------------------------------- 11.82s 2025-08-29 14:51:15.510436 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.54s 2025-08-29 14:51:15.510443 | orchestrator | redis : Copying over redis config files --------------------------------- 3.03s 2025-08-29 14:51:15.510450 | orchestrator | redis : Copying over default config.json files -------------------------- 2.66s 2025-08-29 14:51:15.510458 | orchestrator | redis : Check redis containers ------------------------------------------ 1.90s 2025-08-29 14:51:15.510466 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.24s 2025-08-29 14:51:15.510474 | orchestrator | redis : include_tasks --------------------------------------------------- 0.58s 2025-08-29 14:51:15.510489 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2025-08-29 14:51:15.510502 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.54s 2025-08-29 14:51:15.510510 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.22s 2025-08-29 14:51:15.510518 | orchestrator | 2025-08-29 14:51:15 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:51:15.511601 | orchestrator | 2025-08-29 14:51:15 | INFO  | Task 4dbdbbf1-fdaf-4cb9-a8d3-e8093fcc4753 is in state STARTED 2025-08-29 14:51:15.513060 | orchestrator | 2025-08-29 14:51:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:18.611698 | orchestrator | 2025-08-29 14:51:18 | INFO  | Task f5f127fc-0609-4463-b3f7-335c65b22f0a is in state STARTED 2025-08-29 14:51:18.611777 | orchestrator | 2025-08-29 14:51:18 | INFO  | Task e8fdb1bc-a4b1-4826-9a77-7dfd25acb899 is in state SUCCESS 2025-08-29 14:51:18.611785 | orchestrator | 2025-08-29 14:51:18 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:51:18.611791 
| orchestrator | 2025-08-29 14:51:18 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:51:18.611798 | orchestrator | 2025-08-29 14:51:18 | INFO  | Task 4dbdbbf1-fdaf-4cb9-a8d3-e8093fcc4753 is in state STARTED 2025-08-29 14:51:18.611804 | orchestrator | 2025-08-29 14:51:18 | INFO  | Wait 1 second(s) until the next check [identical task-state polling repeated every 3 seconds from 14:51:21 through 14:51:52 omitted] 2025-08-29 14:51:55.287339 | orchestrator | 2025-08-29 14:51:55 | INFO  | Task f5f127fc-0609-4463-b3f7-335c65b22f0a is in state SUCCESS 2025-08-29 14:51:55.288596 | orchestrator | 2025-08-29 14:51:55.288689 | orchestrator | 2025-08-29 14:51:55.288699 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-08-29 14:51:55.288707 | 
orchestrator | 2025-08-29 14:51:55.288712 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-08-29 14:51:55.288717 | orchestrator | Friday 29 August 2025 14:51:09 +0000 (0:00:00.203) 0:00:00.203 ********* 2025-08-29 14:51:55.288721 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:55.288726 | orchestrator | 2025-08-29 14:51:55.288730 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-08-29 14:51:55.288734 | orchestrator | Friday 29 August 2025 14:51:09 +0000 (0:00:00.564) 0:00:00.767 ********* 2025-08-29 14:51:55.288738 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:55.288742 | orchestrator | 2025-08-29 14:51:55.288747 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-08-29 14:51:55.288753 | orchestrator | Friday 29 August 2025 14:51:10 +0000 (0:00:00.554) 0:00:01.322 ********* 2025-08-29 14:51:55.288764 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-08-29 14:51:55.288770 | orchestrator | 2025-08-29 14:51:55.288776 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-08-29 14:51:55.288782 | orchestrator | Friday 29 August 2025 14:51:11 +0000 (0:00:00.801) 0:00:02.123 ********* 2025-08-29 14:51:55.288788 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:55.288795 | orchestrator | 2025-08-29 14:51:55.288801 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-08-29 14:51:55.288805 | orchestrator | Friday 29 August 2025 14:51:12 +0000 (0:00:01.118) 0:00:03.242 ********* 2025-08-29 14:51:55.288809 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:55.288813 | orchestrator | 2025-08-29 14:51:55.288816 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-08-29 14:51:55.288820 | orchestrator | Friday 29 
August 2025 14:51:13 +0000 (0:00:00.839) 0:00:04.081 ********* 2025-08-29 14:51:55.288824 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 14:51:55.288828 | orchestrator | 2025-08-29 14:51:55.288832 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-08-29 14:51:55.288836 | orchestrator | Friday 29 August 2025 14:51:15 +0000 (0:00:02.070) 0:00:06.152 ********* 2025-08-29 14:51:55.288840 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 14:51:55.288844 | orchestrator | 2025-08-29 14:51:55.288848 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-08-29 14:51:55.288851 | orchestrator | Friday 29 August 2025 14:51:16 +0000 (0:00:01.145) 0:00:07.297 ********* 2025-08-29 14:51:55.288871 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:55.288875 | orchestrator | 2025-08-29 14:51:55.288879 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-08-29 14:51:55.288883 | orchestrator | Friday 29 August 2025 14:51:16 +0000 (0:00:00.391) 0:00:07.689 ********* 2025-08-29 14:51:55.288887 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:55.288890 | orchestrator | 2025-08-29 14:51:55.288894 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:51:55.288909 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:51:55.288915 | orchestrator | 2025-08-29 14:51:55.288919 | orchestrator | 2025-08-29 14:51:55.288923 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:51:55.288927 | orchestrator | Friday 29 August 2025 14:51:17 +0000 (0:00:00.363) 0:00:08.052 ********* 2025-08-29 14:51:55.288930 | orchestrator | =============================================================================== 2025-08-29 
14:51:55.288934 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.07s 2025-08-29 14:51:55.288938 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.15s 2025-08-29 14:51:55.288942 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.12s 2025-08-29 14:51:55.288946 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.84s 2025-08-29 14:51:55.288949 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.80s 2025-08-29 14:51:55.288953 | orchestrator | Get home directory of operator user ------------------------------------- 0.56s 2025-08-29 14:51:55.288957 | orchestrator | Create .kube directory -------------------------------------------------- 0.55s 2025-08-29 14:51:55.288961 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.39s 2025-08-29 14:51:55.288964 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.36s 2025-08-29 14:51:55.288968 | orchestrator | 2025-08-29 14:51:55.288972 | orchestrator | 2025-08-29 14:51:55.288975 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:51:55.288979 | orchestrator | 2025-08-29 14:51:55.288983 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 14:51:55.288989 | orchestrator | Friday 29 August 2025 14:50:40 +0000 (0:00:00.512) 0:00:00.512 ********* 2025-08-29 14:51:55.288995 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:51:55.289001 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:51:55.289007 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:51:55.289013 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:55.289019 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:55.289025 | orchestrator | ok: [testbed-node-2] 
2025-08-29 14:51:55.289030 | orchestrator | 2025-08-29 14:51:55.289036 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 14:51:55.289042 | orchestrator | Friday 29 August 2025 14:50:41 +0000 (0:00:01.120) 0:00:01.632 ********* 2025-08-29 14:51:55.289048 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:51:55.289102 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:51:55.289108 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:51:55.289113 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:51:55.289135 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:51:55.289142 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:51:55.289148 | orchestrator | 2025-08-29 14:51:55.289153 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-08-29 14:51:55.289159 | orchestrator | 2025-08-29 14:51:55.289164 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-08-29 14:51:55.289178 | orchestrator | Friday 29 August 2025 14:50:42 +0000 (0:00:00.938) 0:00:02.571 ********* 2025-08-29 14:51:55.289185 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:51:55.289193 | orchestrator | 2025-08-29 14:51:55.289198 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-08-29 14:51:55.289204 | orchestrator | Friday 29 August 2025 14:50:43 +0000 (0:00:01.175) 0:00:03.746 ********* 2025-08-29 14:51:55.289210 | orchestrator | 
changed: [testbed-node-3] => (item=openvswitch) 2025-08-29 14:51:55.289217 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-08-29 14:51:55.289223 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-08-29 14:51:55.289229 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-08-29 14:51:55.289235 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-08-29 14:51:55.289241 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-08-29 14:51:55.289247 | orchestrator | 2025-08-29 14:51:55.289252 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-08-29 14:51:55.289256 | orchestrator | Friday 29 August 2025 14:50:44 +0000 (0:00:01.526) 0:00:05.273 ********* 2025-08-29 14:51:55.289260 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-08-29 14:51:55.289264 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-08-29 14:51:55.289267 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-08-29 14:51:55.289271 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-08-29 14:51:55.289303 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-08-29 14:51:55.289308 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-08-29 14:51:55.289311 | orchestrator | 2025-08-29 14:51:55.289315 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-08-29 14:51:55.289319 | orchestrator | Friday 29 August 2025 14:50:46 +0000 (0:00:01.668) 0:00:06.942 ********* 2025-08-29 14:51:55.289323 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-08-29 14:51:55.289326 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:51:55.289330 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-08-29 14:51:55.289334 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:51:55.289338 | 
orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-08-29 14:51:55.289341 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:51:55.289345 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-08-29 14:51:55.289349 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:51:55.289353 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-08-29 14:51:55.289357 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:51:55.289361 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-08-29 14:51:55.289365 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:51:55.289368 | orchestrator |
2025-08-29 14:51:55.289372 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-08-29 14:51:55.289376 | orchestrator | Friday 29 August 2025 14:50:48 +0000 (0:00:01.539) 0:00:08.482 *********
2025-08-29 14:51:55.289380 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:51:55.289384 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:51:55.289387 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:51:55.289391 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:51:55.289395 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:51:55.289400 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:51:55.289406 | orchestrator |
2025-08-29 14:51:55.289411 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-08-29 14:51:55.289420 | orchestrator | Friday 29 August 2025 14:50:49 +0000 (0:00:01.081) 0:00:09.563 *********
2025-08-29 14:51:55.290149 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 14:51:55.290214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 14:51:55.290220 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 14:51:55.290228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 14:51:55.290232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 14:51:55.290237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 14:51:55.290246 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:51:55.290257 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:51:55.290265 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:51:55.290269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:51:55.290300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:51:55.290308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:51:55.290319 | orchestrator |
2025-08-29 14:51:55.290326 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-08-29 14:51:55.290333 | orchestrator | Friday 29 August 2025 14:50:51 +0000 (0:00:01.991) 0:00:11.555 *********
2025-08-29 14:51:55.290339 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 14:51:55.290369 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 14:51:55.290377 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 14:51:55.290382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 14:51:55.290386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 14:51:55.290406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 14:51:55.290414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:51:55.290420 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:51:55.290427 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:51:55.290432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:51:55.290436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:51:55.290444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:51:55.290449 | orchestrator |
2025-08-29 14:51:55.290453 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-08-29 14:51:55.290457 | orchestrator | Friday 29 August 2025 14:50:55 +0000 (0:00:03.890) 0:00:15.445 *********
2025-08-29 14:51:55.290474 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:51:55.290478 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:51:55.290483 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:51:55.290490 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:51:55.290509 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:51:55.290518 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:51:55.290524 | orchestrator |
2025-08-29 14:51:55.290529 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-08-29 14:51:55.290535 | orchestrator | Friday 29 August 2025 14:50:56 +0000 (0:00:01.484) 0:00:16.929 *********
2025-08-29 14:51:55.290547 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 14:51:55.290557 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 14:51:55.290564 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 14:51:55.290576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 14:51:55.290583 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:51:55.290594 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:51:55.290601 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:51:55.290615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 14:51:55.290622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 14:51:55.290635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:51:55.290644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:51:55.290656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:51:55.290663 | orchestrator |
2025-08-29 14:51:55.290670 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-08-29 14:51:55.290676 | orchestrator | Friday 29 August 2025 14:50:59 +0000 (0:00:02.583) 0:00:19.512 *********
2025-08-29 14:51:55.290684 | orchestrator |
2025-08-29 14:51:55.290688 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-08-29 14:51:55.290692 | orchestrator | Friday 29 August 2025 14:50:59 +0000 (0:00:00.308) 0:00:19.821 *********
2025-08-29 14:51:55.290696 | orchestrator |
2025-08-29 14:51:55.290700 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-08-29 14:51:55.290704 | orchestrator | Friday 29 August 2025 14:50:59 +0000 (0:00:00.239) 0:00:20.060 *********
2025-08-29 14:51:55.290708 | orchestrator |
2025-08-29 14:51:55.290712 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-08-29 14:51:55.290716 | orchestrator | Friday 29 August 2025 14:50:59 +0000 (0:00:00.287) 0:00:20.347 *********
2025-08-29 14:51:55.290720 | orchestrator |
2025-08-29 14:51:55.290724 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-08-29 14:51:55.290731 | orchestrator | Friday 29 August 2025 14:51:00 +0000 (0:00:00.243) 0:00:20.590 *********
2025-08-29 14:51:55.290736 | orchestrator |
2025-08-29 14:51:55.290744 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-08-29 14:51:55.290748 | orchestrator | Friday 29 August 2025 14:51:00 +0000 (0:00:00.366) 0:00:20.956 *********
2025-08-29 14:51:55.290752 | orchestrator |
2025-08-29 14:51:55.290757 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-08-29 14:51:55.290760 | orchestrator | Friday 29 August 2025 14:51:00 +0000 (0:00:00.272) 0:00:21.228 *********
2025-08-29 14:51:55.290764 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:51:55.290768 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:51:55.290792 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:51:55.290799 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:51:55.290803 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:51:55.290808 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:51:55.290812 | orchestrator |
2025-08-29 14:51:55.290816 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-08-29 14:51:55.290820 | orchestrator | Friday 29 August 2025 14:51:15 +0000 (0:00:14.771) 0:00:36.000 *********
2025-08-29 14:51:55.290824 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:51:55.290829 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:51:55.290834 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:51:55.290838 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:51:55.290842 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:51:55.290846 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:51:55.290850 | orchestrator |
2025-08-29 14:51:55.290854 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-08-29 14:51:55.290858 | orchestrator | Friday 29 August 2025 14:51:17 +0000 (0:00:02.165) 0:00:38.166 *********
2025-08-29 14:51:55.290862 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:51:55.290866 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:51:55.290870 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:51:55.290874 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:51:55.290878 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:51:55.290882 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:51:55.290886 | orchestrator |
2025-08-29 14:51:55.290890 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-08-29 14:51:55.290894 | orchestrator | Friday 29 August 2025 14:51:28 +0000 (0:00:10.869) 0:00:49.035 *********
2025-08-29 14:51:55.290899 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-08-29 14:51:55.290904 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-08-29 14:51:55.290908 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-08-29 14:51:55.290912 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-08-29 14:51:55.290916 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-08-29 14:51:55.290920 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-08-29 14:51:55.290924 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-08-29 14:51:55.290929 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-08-29 14:51:55.290933 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-08-29 14:51:55.290937 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-08-29 14:51:55.290943 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-08-29 14:51:55.290950 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-08-29 14:51:55.290965 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-08-29 14:51:55.290978 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-08-29 14:51:55.290985 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-08-29 14:51:55.290992 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-08-29 14:51:55.290999 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-08-29 14:51:55.291007 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-08-29 14:51:55.291011 | orchestrator |
2025-08-29 14:51:55.291015 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-08-29 14:51:55.291019 | orchestrator | Friday 29 August 2025 14:51:36 +0000 (0:00:07.958) 0:00:56.994 *********
2025-08-29 14:51:55.291023 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-08-29 14:51:55.291027 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:51:55.291031 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-08-29 14:51:55.291034 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:51:55.291038 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-08-29 14:51:55.291042 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:51:55.291051 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-08-29 14:51:55.291058 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-08-29 14:51:55.291064 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-08-29 14:51:55.291070 | orchestrator |
2025-08-29 14:51:55.291077 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-08-29 14:51:55.291084 | orchestrator | Friday 29 August 2025 14:51:39 +0000 (0:00:02.931) 0:00:59.926 *********
2025-08-29 14:51:55.291090 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-08-29 14:51:55.291096 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:51:55.291112 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-08-29 14:51:55.291117 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:51:55.291121 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-08-29 14:51:55.291125 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:51:55.291129 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-08-29 14:51:55.291134 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-08-29 14:51:55.291138 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-08-29 14:51:55.291142 | orchestrator |
2025-08-29 14:51:55.291145 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-08-29 14:51:55.291149 | orchestrator | Friday 29 August 2025 14:51:43 +0000 (0:00:04.257) 0:01:04.183 *********
2025-08-29 14:51:55.291153 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:51:55.291157 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:51:55.291161 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:51:55.291166 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:51:55.291169 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:51:55.291173 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:51:55.291177 | orchestrator |
2025-08-29 14:51:55.291181 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:51:55.291186 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 14:51:55.291192 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 14:51:55.291201 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 14:51:55.291395 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 14:51:55.291409 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 14:51:55.291413 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 14:51:55.291433 | orchestrator |
2025-08-29 14:51:55.291438 | orchestrator |
2025-08-29 14:51:55.291451 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:51:55.291456 | orchestrator | Friday 29 August 2025 14:51:53 +0000 (0:00:09.451) 0:01:13.634 *********
2025-08-29 14:51:55.291460 | orchestrator | ===============================================================================
2025-08-29 14:51:55.291472 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.32s
2025-08-29 14:51:55.291476 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 14.77s
2025-08-29 14:51:55.291481 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.96s
2025-08-29 14:51:55.291485 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.26s
2025-08-29 14:51:55.291489 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.89s
2025-08-29 14:51:55.291493 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.93s
2025-08-29 14:51:55.291497 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.58s
2025-08-29 14:51:55.291507 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.17s
2025-08-29 14:51:55.291512 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.99s
2025-08-29 14:51:55.291517 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.72s
2025-08-29 14:51:55.291521 |
orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.67s 2025-08-29 14:51:55.291526 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.54s 2025-08-29 14:51:55.291530 | orchestrator | module-load : Load modules ---------------------------------------------- 1.53s 2025-08-29 14:51:55.291533 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.48s 2025-08-29 14:51:55.291537 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.18s 2025-08-29 14:51:55.291541 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.12s 2025-08-29 14:51:55.291545 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.08s 2025-08-29 14:51:55.291549 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s 2025-08-29 14:51:55.291553 | orchestrator | 2025-08-29 14:51:55 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:51:55.294347 | orchestrator | 2025-08-29 14:51:55 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:51:55.297937 | orchestrator | 2025-08-29 14:51:55 | INFO  | Task 4dbdbbf1-fdaf-4cb9-a8d3-e8093fcc4753 is in state STARTED 2025-08-29 14:51:55.300140 | orchestrator | 2025-08-29 14:51:55 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED 2025-08-29 14:51:55.300199 | orchestrator | 2025-08-29 14:51:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:58.341418 | orchestrator | 2025-08-29 14:51:58 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:51:58.346204 | orchestrator | 2025-08-29 14:51:58 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:51:58.347256 | orchestrator | 2025-08-29 14:51:58 | INFO  | Task 
4dbdbbf1-fdaf-4cb9-a8d3-e8093fcc4753 is in state STARTED 2025-08-29 14:53:39.124320 | orchestrator | 2025-08-29 14:53:39 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED 2025-08-29 14:53:39.124559 | orchestrator | 2025-08-29 14:53:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:53:42.164294 | orchestrator | 2025-08-29 14:53:42 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:53:42.165767 | orchestrator | 2025-08-29 14:53:42 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:53:42.168584 | orchestrator | 2025-08-29 14:53:42.168632 | orchestrator | 2025-08-29 14:53:42 | INFO  | Task 4dbdbbf1-fdaf-4cb9-a8d3-e8093fcc4753 is in state SUCCESS 2025-08-29 14:53:42.170010 | orchestrator | 2025-08-29 14:53:42.170123 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-08-29 14:53:42.170135 | orchestrator | 2025-08-29 14:53:42.170149 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-08-29 14:53:42.170157 | orchestrator | Friday 29 August 2025 14:51:06 +0000 (0:00:00.113) 0:00:00.113 ********* 2025-08-29 14:53:42.170164 | orchestrator | ok: [localhost] => { 2025-08-29 14:53:42.170173 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-08-29 14:53:42.170180 | orchestrator | } 2025-08-29 14:53:42.170188 | orchestrator | 2025-08-29 14:53:42.170194 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-08-29 14:53:42.170201 | orchestrator | Friday 29 August 2025 14:51:06 +0000 (0:00:00.134) 0:00:00.248 ********* 2025-08-29 14:53:42.170233 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-08-29 14:53:42.170241 | orchestrator | ...ignoring
2025-08-29 14:53:42.170248 | orchestrator |
2025-08-29 14:53:42.170287 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-08-29 14:53:42.170296 | orchestrator | Friday 29 August 2025 14:51:09 +0000 (0:00:03.232) 0:00:03.481 *********
2025-08-29 14:53:42.170303 | orchestrator | skipping: [localhost]
2025-08-29 14:53:42.170309 | orchestrator |
2025-08-29 14:53:42.170316 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-08-29 14:53:42.170323 | orchestrator | Friday 29 August 2025 14:51:09 +0000 (0:00:00.060) 0:00:03.541 *********
2025-08-29 14:53:42.170331 | orchestrator | ok: [localhost]
2025-08-29 14:53:42.170337 | orchestrator |
2025-08-29 14:53:42.170344 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 14:53:42.170351 | orchestrator |
2025-08-29 14:53:42.170358 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 14:53:42.170365 | orchestrator | Friday 29 August 2025 14:51:10 +0000 (0:00:00.142) 0:00:03.683 *********
2025-08-29 14:53:42.170372 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:53:42.170378 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:53:42.170384 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:53:42.170391 | orchestrator |
2025-08-29 14:53:42.170398 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 14:53:42.170404 | orchestrator | Friday 29 August 2025 14:51:10 +0000 (0:00:00.512) 0:00:04.196 *********
2025-08-29 14:53:42.170410 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-08-29 14:53:42.170418 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-08-29 14:53:42.170425 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-08-29 14:53:42.170432 | orchestrator |
2025-08-29 14:53:42.170439 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-08-29 14:53:42.170445 | orchestrator |
2025-08-29 14:53:42.170452 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-08-29 14:53:42.170459 | orchestrator | Friday 29 August 2025 14:51:11 +0000 (0:00:00.603) 0:00:04.800 *********
2025-08-29 14:53:42.170467 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:53:42.170474 | orchestrator |
2025-08-29 14:53:42.170481 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-08-29 14:53:42.170488 | orchestrator | Friday 29 August 2025 14:51:11 +0000 (0:00:00.584) 0:00:05.385 *********
2025-08-29 14:53:42.170495 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:53:42.170502 | orchestrator |
2025-08-29 14:53:42.170509 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-08-29 14:53:42.170516 | orchestrator | Friday 29 August 2025 14:51:12 +0000 (0:00:01.048) 0:00:06.433 *********
2025-08-29 14:53:42.170540 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.170548 | orchestrator |
2025-08-29 14:53:42.170554 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-08-29 14:53:42.170561 | orchestrator | Friday 29 August 2025 14:51:13 +0000 (0:00:00.396) 0:00:06.830 *********
2025-08-29 14:53:42.170568 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.170575 | orchestrator |
2025-08-29 14:53:42.170582 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-08-29 14:53:42.170589 | orchestrator | Friday 29 August 2025 14:51:13 +0000 (0:00:00.744) 0:00:07.575 *********
2025-08-29 14:53:42.170596 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.170603 | orchestrator |
2025-08-29 14:53:42.170610 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-08-29 14:53:42.170617 | orchestrator | Friday 29 August 2025 14:51:14 +0000 (0:00:00.421) 0:00:07.996 *********
2025-08-29 14:53:42.170623 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.170630 | orchestrator |
2025-08-29 14:53:42.170636 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-08-29 14:53:42.170643 | orchestrator | Friday 29 August 2025 14:51:14 +0000 (0:00:00.486) 0:00:08.483 *********
2025-08-29 14:53:42.170650 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-2, testbed-node-1
2025-08-29 14:53:42.170656 | orchestrator |
2025-08-29 14:53:42.170663 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-08-29 14:53:42.170669 | orchestrator | Friday 29 August 2025 14:51:16 +0000 (0:00:02.121) 0:00:10.604 *********
2025-08-29 14:53:42.170675 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:53:42.170682 | orchestrator |
2025-08-29 14:53:42.170688 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-08-29 14:53:42.170695 | orchestrator | Friday 29 August 2025 14:51:18 +0000 (0:00:01.502) 0:00:12.107 *********
2025-08-29 14:53:42.170701 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.170708 | orchestrator |
2025-08-29 14:53:42.170714 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-08-29 14:53:42.170720 | orchestrator | Friday 29 August 2025 14:51:19 +0000 (0:00:00.880) 0:00:12.987 *********
2025-08-29 14:53:42.170727 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.170734 | orchestrator |
2025-08-29 14:53:42.170758 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-08-29 14:53:42.170769 | orchestrator | Friday 29 August 2025 14:51:20 +0000 (0:00:01.344) 0:00:14.332 *********
2025-08-29 14:53:42.170780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 14:53:42.170791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 14:53:42.170807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 14:53:42.170814 | orchestrator |
2025-08-29 14:53:42.170821 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-08-29 14:53:42.170828 | orchestrator | Friday 29 August 2025 14:51:22 +0000 (0:00:01.921) 0:00:16.254 *********
2025-08-29 14:53:42.170845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 14:53:42.170853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 14:53:42.170877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 14:53:42.170884 | orchestrator |
2025-08-29 14:53:42.170891 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-08-29 14:53:42.170896 | orchestrator | Friday 29 August 2025 14:51:25 +0000 (0:00:02.839) 0:00:19.093 *********
2025-08-29 14:53:42.170902 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-08-29 14:53:42.170909 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-08-29 14:53:42.170915 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-08-29 14:53:42.170922 | orchestrator |
2025-08-29 14:53:42.170928 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-08-29 14:53:42.170934 | orchestrator | Friday 29 August 2025 14:51:27 +0000 (0:00:01.727) 0:00:20.820 *********
2025-08-29 14:53:42.170939 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-08-29 14:53:42.170945 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-08-29 14:53:42.170951 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-08-29 14:53:42.170964 | orchestrator |
2025-08-29 14:53:42.170969 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-08-29 14:53:42.170975 | orchestrator | Friday 29 August 2025 14:51:29 +0000 (0:00:02.166) 0:00:22.987 *********
2025-08-29 14:53:42.170981 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-08-29 14:53:42.170987 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-08-29 14:53:42.170993 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-08-29 14:53:42.171141 | orchestrator |
2025-08-29 14:53:42.171648 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-08-29 14:53:42.171715 | orchestrator | Friday 29 August 2025 14:51:31 +0000 (0:00:02.008) 0:00:24.995 *********
2025-08-29 14:53:42.171760 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-08-29 14:53:42.171808 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-08-29 14:53:42.171821 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-08-29 14:53:42.171831 | orchestrator |
2025-08-29 14:53:42.171842 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-08-29 14:53:42.171853 | orchestrator | Friday 29 August 2025 14:51:33 +0000 (0:00:02.291) 0:00:27.287 *********
2025-08-29 14:53:42.171863 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-08-29 14:53:42.171900 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-08-29 14:53:42.171910 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-08-29 14:53:42.171920 | orchestrator |
2025-08-29 14:53:42.171929 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-08-29 14:53:42.171940 | orchestrator | Friday 29 August 2025 14:51:35 +0000 (0:00:02.056) 0:00:29.343 *********
2025-08-29 14:53:42.171950 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-08-29 14:53:42.171959 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-08-29 14:53:42.171969 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-08-29 14:53:42.171978 | orchestrator |
2025-08-29 14:53:42.171990 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-08-29 14:53:42.172000 | orchestrator | Friday 29 August 2025 14:51:37 +0000 (0:00:01.723) 0:00:31.067 *********
2025-08-29 14:53:42.172012 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.172023 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:53:42.172033 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:53:42.172042 | orchestrator |
2025-08-29 14:53:42.172052 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-08-29 14:53:42.172063 | orchestrator | Friday 29 August 2025 14:51:38 +0000 (0:00:00.614) 0:00:31.681 *********
2025-08-29 14:53:42.172084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 14:53:42.172100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 14:53:42.172135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 14:53:42.172160 | orchestrator |
2025-08-29 14:53:42.172172 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-08-29 14:53:42.172183 | orchestrator | Friday 29 August 2025 14:51:40 +0000 (0:00:02.083) 0:00:33.765 *********
2025-08-29 14:53:42.172193 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:53:42.172227 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:53:42.172239 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:53:42.172250 | orchestrator |
2025-08-29 14:53:42.172263 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-08-29 14:53:42.172274 | orchestrator | Friday 29 August 2025 14:51:41 +0000 (0:00:01.112) 0:00:34.878 *********
2025-08-29 14:53:42.172286 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:53:42.172298 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:53:42.172311 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:53:42.172323 | orchestrator |
2025-08-29 14:53:42.172333 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-08-29 14:53:42.172344 | orchestrator | Friday 29 August 2025 14:51:56 +0000 (0:00:14.896) 0:00:49.774 *********
2025-08-29 14:53:42.172355 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:53:42.172367 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:53:42.172379 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:53:42.172391 | orchestrator |
2025-08-29 14:53:42.172401 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-08-29 14:53:42.172413 | orchestrator |
2025-08-29 14:53:42.172423 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-08-29 14:53:42.172434 | orchestrator | Friday 29 August 2025 14:51:56 +0000 (0:00:00.734) 0:00:50.509 *********
2025-08-29 14:53:42.172445 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:53:42.172457 | orchestrator |
2025-08-29 14:53:42.172466 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-08-29 14:53:42.172476 | orchestrator | Friday 29 August 2025 14:51:57 +0000 (0:00:00.723) 0:00:51.232 *********
2025-08-29 14:53:42.172487 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.172497 | orchestrator |
2025-08-29 14:53:42.172507 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-08-29 14:53:42.172516 | orchestrator | Friday 29 August 2025 14:51:57 +0000 (0:00:00.305) 0:00:51.537 *********
2025-08-29 14:53:42.172527 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:53:42.172536 | orchestrator |
2025-08-29 14:53:42.172547 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-08-29 14:53:42.172556 | orchestrator | Friday 29 August 2025 14:51:59 +0000 (0:00:02.102) 0:00:53.639 *********
2025-08-29 14:53:42.172567 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:53:42.172578 | orchestrator |
2025-08-29 14:53:42.172588 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-08-29 14:53:42.172597 | orchestrator |
2025-08-29 14:53:42.172607 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-08-29 14:53:42.172617 | orchestrator | Friday 29 August 2025 14:52:57 +0000 (0:00:57.785) 0:01:51.425 *********
2025-08-29 14:53:42.172628 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:53:42.172638 | orchestrator |
2025-08-29 14:53:42.172647 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-08-29 14:53:42.172670 | orchestrator | Friday 29 August 2025 14:52:58 +0000 (0:00:00.652) 0:01:52.078 *********
2025-08-29 14:53:42.172681 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:53:42.172693 | orchestrator |
2025-08-29 14:53:42.172704 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-08-29 14:53:42.172716 | orchestrator | Friday 29 August 2025 14:52:58 +0000 (0:00:00.246) 0:01:52.325 *********
2025-08-29 14:53:42.172727 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:53:42.172738 | orchestrator |
2025-08-29 14:53:42.172748 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-08-29 14:53:42.172759 | orchestrator | Friday 29 August 2025 14:53:00 +0000 (0:00:01.907) 0:01:54.232 *********
2025-08-29 14:53:42.172770 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:53:42.172781 | orchestrator |
2025-08-29 14:53:42.172791 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-08-29 14:53:42.172800 | orchestrator |
2025-08-29 14:53:42.172810 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-08-29 14:53:42.172820 | orchestrator | Friday 29 August 2025 14:53:17 +0000 (0:00:16.561) 0:02:10.793 *********
2025-08-29 14:53:42.172829 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:53:42.172839 | orchestrator |
2025-08-29 14:53:42.172849 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-08-29 14:53:42.172860 | orchestrator | Friday 29 August 2025 14:53:17 +0000 (0:00:00.741) 0:02:11.534 *********
2025-08-29 14:53:42.172870 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:53:42.172880 | orchestrator |
2025-08-29 14:53:42.172890 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-08-29 14:53:42.172899 | orchestrator | Friday 29 August 2025 14:53:18 +0000 (0:00:00.666) 0:02:12.201 *********
2025-08-29 14:53:42.172909 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:53:42.172919 | orchestrator |
2025-08-29 14:53:42.172929 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-08-29 14:53:42.172951 | orchestrator | Friday 29 August 2025 14:53:20 +0000 (0:00:01.907) 0:02:14.108 *********
2025-08-29 14:53:42.172961 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:53:42.172970 | orchestrator |
2025-08-29 14:53:42.172989 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-08-29 14:53:42.173000 | orchestrator |
2025-08-29 14:53:42.173011 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-08-29 14:53:42.173021 | orchestrator | Friday 29 August 2025 14:53:35 +0000 (0:00:15.465) 0:02:29.574 *********
2025-08-29 14:53:42.173032 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:53:42.173042 | orchestrator |
2025-08-29 14:53:42.173052 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-08-29 14:53:42.173062 | orchestrator | Friday 29 August 2025 14:53:36 +0000 (0:00:00.716) 0:02:30.290 *********
2025-08-29 14:53:42.173073 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-08-29 14:53:42.173084 | orchestrator | enable_outward_rabbitmq_True
2025-08-29 14:53:42.173094 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-08-29 14:53:42.173105 | orchestrator | outward_rabbitmq_restart
2025-08-29 14:53:42.173281 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:53:42.173299 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:53:42.173309 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:53:42.173320 | orchestrator |
2025-08-29 14:53:42.173331 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-08-29 14:53:42.173341 | orchestrator | skipping: no hosts matched
2025-08-29 14:53:42.173350 | orchestrator |
2025-08-29 14:53:42.173360 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-08-29 14:53:42.173370 | orchestrator | skipping: no hosts matched
2025-08-29 14:53:42.173380 | orchestrator |
2025-08-29 14:53:42.173390 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-08-29 14:53:42.173400 | orchestrator | skipping: no hosts matched
2025-08-29 14:53:42.173427 | orchestrator |
2025-08-29 14:53:42.173439 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:53:42.173451 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-08-29 14:53:42.173466 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 14:53:42.173474 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:53:42.173481 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:53:42.173487 | orchestrator |
2025-08-29 14:53:42.173494 | orchestrator |
2025-08-29 14:53:42.173501 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:53:42.173507 | orchestrator | Friday 29 August 2025 14:53:39 +0000 (0:00:02.709) 0:02:32.999 *********
2025-08-29 14:53:42.173514 | orchestrator | ===============================================================================
2025-08-29 14:53:42.173521 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 89.81s
2025-08-29 14:53:42.173527 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------ 14.90s
2025-08-29 14:53:42.173534 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.92s
2025-08-29 14:53:42.173540 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.23s
2025-08-29 14:53:42.173547 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.84s
2025-08-29 14:53:42.173553 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.71s
2025-08-29 14:53:42.173560 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.29s
2025-08-29 14:53:42.173566 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.17s
2025-08-29 14:53:42.173573 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.12s
2025-08-29 14:53:42.173579 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.12s
2025-08-29 14:53:42.173586 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.08s
2025-08-29 14:53:42.173593 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.06s
2025-08-29 14:53:42.173599 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.01s
2025-08-29 14:53:42.173606 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.92s
2025-08-29 14:53:42.173612 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.73s
2025-08-29 14:53:42.173619 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.72s
2025-08-29 14:53:42.173625 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.50s
2025-08-29 14:53:42.173632 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 1.34s
2025-08-29 14:53:42.173638 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.22s
2025-08-29 14:53:42.173645 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.11s
2025-08-29 14:53:42.173652 | orchestrator | 2025-08-29 14:53:42 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED
2025-08-29 14:53:42.173659 | orchestrator | 2025-08-29 14:53:42 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:53:45.219041 | orchestrator | 2025-08-29 14:53:45 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:53:45.219540 | orchestrator | 2025-08-29 14:53:45 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:53:45.220428 | orchestrator | 2025-08-29 14:53:45 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED
2025-08-29 14:53:45.220464 | orchestrator | 2025-08-29 14:53:45 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:53:48.259972 | orchestrator | 2025-08-29 14:53:48 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:53:48.260456 | orchestrator | 2025-08-29 14:53:48 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:53:48.262126 | orchestrator | 2025-08-29 14:53:48 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED
2025-08-29 14:53:48.262158 | orchestrator | 2025-08-29 14:53:48 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:53:51.310455 | orchestrator | 2025-08-29 14:53:51 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:53:51.311594 | orchestrator | 2025-08-29 14:53:51 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:53:51.316345 | orchestrator | 2025-08-29 14:53:51 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED
2025-08-29 14:53:51.316403 | orchestrator | 2025-08-29 14:53:51 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:53:54.363916 | orchestrator | 2025-08-29 14:53:54 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:53:54.365111 | orchestrator | 2025-08-29 14:53:54 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:53:54.365706 | orchestrator | 2025-08-29 14:53:54 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED
2025-08-29 14:53:54.365734 | orchestrator | 2025-08-29 14:53:54 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:53:57.411156 | orchestrator | 2025-08-29 14:53:57 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:53:57.414524 | orchestrator | 2025-08-29 14:53:57 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:53:57.417540 | orchestrator | 2025-08-29 14:53:57 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED
2025-08-29 14:53:57.417618 | orchestrator | 2025-08-29 14:53:57 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:00.461508 | orchestrator | 2025-08-29 14:54:00 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:54:00.462422 | orchestrator | 2025-08-29 14:54:00 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:54:00.464341 | orchestrator | 2025-08-29 14:54:00 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED
2025-08-29 14:54:00.464383 | orchestrator | 2025-08-29 14:54:00 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:03.506275 | orchestrator | 2025-08-29 14:54:03 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:54:03.507392 | orchestrator | 2025-08-29 14:54:03 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:54:03.509943 | orchestrator | 2025-08-29 14:54:03 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED
2025-08-29 14:54:03.509981 | orchestrator | 2025-08-29 14:54:03 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:06.551346 | orchestrator | 2025-08-29 14:54:06 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:54:06.552076 | orchestrator | 2025-08-29 14:54:06 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:54:06.553380 | orchestrator | 2025-08-29 14:54:06 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED
2025-08-29 14:54:06.553463 | orchestrator | 2025-08-29 14:54:06 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:09.605853 | orchestrator | 2025-08-29 14:54:09 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:54:09.607435 | orchestrator | 2025-08-29 14:54:09 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:54:09.608857 | orchestrator | 2025-08-29 14:54:09 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED
2025-08-29 14:54:09.608900 | orchestrator | 2025-08-29 14:54:09 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:12.655615 | orchestrator | 2025-08-29 14:54:12 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:54:12.657737 | orchestrator | 2025-08-29 14:54:12 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:54:12.659085 | orchestrator | 2025-08-29 14:54:12 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED
2025-08-29 14:54:12.659129 | orchestrator | 2025-08-29 14:54:12 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:15.713056 | orchestrator | 2025-08-29 14:54:15 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:54:15.714570 | orchestrator | 2025-08-29 14:54:15 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:54:15.715855 | orchestrator | 2025-08-29 14:54:15 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED
2025-08-29 14:54:15.715940 | orchestrator | 2025-08-29 14:54:15 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:18.758119 | orchestrator | 2025-08-29 14:54:18 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:54:18.759595 | orchestrator | 2025-08-29 14:54:18 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:54:18.760777 | orchestrator | 2025-08-29 14:54:18 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED
2025-08-29 14:54:18.761473 | orchestrator | 2025-08-29 14:54:18 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:21.809031 | orchestrator | 2025-08-29 14:54:21 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:54:21.810344 | orchestrator | 2025-08-29 14:54:21 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:54:21.811108 | orchestrator | 2025-08-29 14:54:21 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED
2025-08-29 14:54:21.811137 | orchestrator | 2025-08-29 14:54:21 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:24.852806 | orchestrator | 2025-08-29 14:54:24 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:54:24.854407 | orchestrator | 2025-08-29 14:54:24 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:54:24.855788 | orchestrator | 2025-08-29 14:54:24 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED
2025-08-29 14:54:24.856131 | orchestrator | 2025-08-29 14:54:24 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:27.904484 | orchestrator | 2025-08-29 14:54:27 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:54:27.906795 | orchestrator | 2025-08-29 14:54:27 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:54:27.909057 | orchestrator | 2025-08-29 14:54:27 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED
2025-08-29 14:54:27.909129 | orchestrator | 2025-08-29 14:54:27 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:30.952752 | orchestrator | 2025-08-29 14:54:30 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:54:30.954691 | orchestrator | 2025-08-29 14:54:30 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:54:30.956131 | orchestrator | 2025-08-29 14:54:30 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED
2025-08-29 14:54:30.957241 | orchestrator | 2025-08-29 14:54:30 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:34.007757 | orchestrator |
2025-08-29 14:54:34 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:54:34.007875 | orchestrator | 2025-08-29 14:54:34 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:54:34.007882 | orchestrator | 2025-08-29 14:54:34 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED 2025-08-29 14:54:34.007888 | orchestrator | 2025-08-29 14:54:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:37.060011 | orchestrator | 2025-08-29 14:54:37 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:54:37.064283 | orchestrator | 2025-08-29 14:54:37 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:54:37.065825 | orchestrator | 2025-08-29 14:54:37 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED 2025-08-29 14:54:37.065854 | orchestrator | 2025-08-29 14:54:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:40.102308 | orchestrator | 2025-08-29 14:54:40 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:54:40.103957 | orchestrator | 2025-08-29 14:54:40 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:54:40.106130 | orchestrator | 2025-08-29 14:54:40 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state STARTED 2025-08-29 14:54:40.106216 | orchestrator | 2025-08-29 14:54:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:43.148970 | orchestrator | 2025-08-29 14:54:43 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:54:43.149069 | orchestrator | 2025-08-29 14:54:43 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:54:43.150824 | orchestrator | 2025-08-29 14:54:43 | INFO  | Task 0926a33f-aa25-4f81-8155-baca2cffcf59 is in state SUCCESS 2025-08-29 14:54:43.154911 | orchestrator | 2025-08-29 14:54:43.154965 | 
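The wait loop above can be sketched as a simple state poller (a minimal sketch with a hypothetical `get_task_state` callback; the real OSISM client queries its task API for Celery-style states):

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll each task's state until none is left in a non-terminal state,
    logging one line per task per cycle, as the job console shows."""
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # hypothetical callback
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        pending = {t for t in pending if states[t] not in ("SUCCESS", "FAILURE")}
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states
```

The log's roughly three-second cycle is the one-second wait plus the time spent querying the three task states.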
orchestrator |
2025-08-29 14:54:43.154974 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 14:54:43.154982 | orchestrator |
2025-08-29 14:54:43.154990 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 14:54:43.154997 | orchestrator | Friday 29 August 2025 14:52:00 +0000 (0:00:00.523) 0:00:00.523 *********
2025-08-29 14:54:43.155004 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:54:43.155013 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:54:43.155020 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:54:43.155026 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:54:43.155032 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:43.155039 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:43.155046 | orchestrator |
2025-08-29 14:54:43.155052 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 14:54:43.155059 | orchestrator | Friday 29 August 2025 14:52:02 +0000 (0:00:01.668) 0:00:02.192 *********
2025-08-29 14:54:43.155066 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-08-29 14:54:43.155073 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-08-29 14:54:43.155079 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-08-29 14:54:43.155104 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-08-29 14:54:43.155111 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-08-29 14:54:43.155118 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-08-29 14:54:43.155124 | orchestrator |
2025-08-29 14:54:43.155130 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-08-29 14:54:43.155136 | orchestrator |
2025-08-29 14:54:43.155142 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-08-29
14:54:43.155147 | orchestrator | Friday 29 August 2025 14:52:03 +0000 (0:00:01.678) 0:00:03.870 *********
2025-08-29 14:54:43.155155 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:54:43.155181 | orchestrator |
2025-08-29 14:54:43.155188 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-08-29 14:54:43.155195 | orchestrator | Friday 29 August 2025 14:52:05 +0000 (0:00:01.598) 0:00:05.469 *********
2025-08-29 14:54:43.155204 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155256 | orchestrator |
2025-08-29 14:54:43.155273 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-08-29 14:54:43.155279 | orchestrator | Friday 29 August 2025 14:52:06 +0000 (0:00:01.604) 0:00:07.074 *********
2025-08-29 14:54:43.155293 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155307 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155314 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155328 |
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155335 | orchestrator |
2025-08-29 14:54:43.155342 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-08-29 14:54:43.155348 | orchestrator | Friday 29 August 2025 14:52:09 +0000 (0:00:02.165) 0:00:09.240 *********
2025-08-29 14:54:43.155355 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155365 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155377 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155409 | orchestrator |
2025-08-29 14:54:43.155416 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-08-29 14:54:43.155423 | orchestrator | Friday 29 August 2025 14:52:10 +0000 (0:00:01.496) 0:00:10.737 *********
2025-08-29 14:54:43.155429 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155436 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155443 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro',
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155477 | orchestrator |
2025-08-29 14:54:43.155488 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-08-29 14:54:43.155495 | orchestrator | Friday 29 August 2025 14:52:12 +0000 (0:00:01.833) 0:00:12.570 *********
2025-08-29 14:54:43.155502 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155509 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155515 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.155541 | orchestrator |
2025-08-29 14:54:43.155547 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-08-29 14:54:43.155554 | orchestrator | Friday 29 August 2025 14:52:13 +0000 (0:00:01.418) 0:00:13.989 *********
2025-08-29 14:54:43.155560 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:54:43.155566 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:54:43.155572 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:54:43.155577 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:54:43.155583 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:43.155589 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:54:43.155602 | orchestrator |
2025-08-29 14:54:43.155608 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-08-29 14:54:43.155615 | orchestrator | Friday 29 August 2025 14:52:17 +0000 (0:00:03.204) 0:00:17.193 *********
2025-08-29 14:54:43.155622 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-08-29 14:54:43.155632 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-08-29 14:54:43.155639 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-08-29 14:54:43.155646 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-08-29 14:54:43.155653 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-08-29 14:54:43.155659 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 14:54:43.155666 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-08-29 14:54:43.155672 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 14:54:43.155683 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 14:54:43.155690 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 14:54:43.155697 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 14:54:43.155704 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 14:54:43.155712 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 14:54:43.155718 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 14:54:43.155725 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 14:54:43.155731 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 14:54:43.155738 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 14:54:43.155746 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 14:54:43.155753 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 14:54:43.155760 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 14:54:43.155766 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 14:54:43.155773 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 14:54:43.155780 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29
14:54:43.155786 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 14:54:43.155793 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 14:54:43.155800 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 14:54:43.155806 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 14:54:43.155813 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 14:54:43.155825 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 14:54:43.155833 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 14:54:43.155839 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 14:54:43.155845 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 14:54:43.155852 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 14:54:43.155859 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-08-29 14:54:43.155866 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 14:54:43.155872 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 14:54:43.155879 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 14:54:43.155885 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-08-29 14:54:43.155892 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-08-29 14:54:43.155899 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-08-29 14:54:43.155906 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-08-29 14:54:43.155913 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-08-29 14:54:43.155919 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-08-29 14:54:43.155926 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-08-29 14:54:43.155937 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-08-29 14:54:43.155944 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-08-29 14:54:43.155950 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-08-29 14:54:43.155957 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-08-29 14:54:43.155963 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-08-29 14:54:43.155970 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-08-29 14:54:43.155976 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-08-29 14:54:43.155983 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-08-29 14:54:43.155989 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-08-29 14:54:43.155996 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-08-29 14:54:43.156003 | orchestrator |
2025-08-29 14:54:43.156010 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 14:54:43.156016 | orchestrator | Friday 29 August 2025 14:52:35 +0000 (0:00:18.909) 0:00:36.103 *********
2025-08-29 14:54:43.156027 | orchestrator |
2025-08-29 14:54:43.156034 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 14:54:43.156041 | orchestrator | Friday 29 August 2025 14:52:36 +0000 (0:00:00.315) 0:00:36.418 *********
2025-08-29 14:54:43.156048 | orchestrator |
2025-08-29 14:54:43.156054 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 14:54:43.156061 | orchestrator | Friday 29 August 2025 14:52:36 +0000 (0:00:00.070) 0:00:36.489 *********
2025-08-29 14:54:43.156068 | orchestrator |
2025-08-29 14:54:43.156074 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 14:54:43.156081 | orchestrator | Friday 29 August 2025 14:52:36 +0000 (0:00:00.063) 0:00:36.552 *********
2025-08-29 14:54:43.156088 | orchestrator |
2025-08-29 14:54:43.156094 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 14:54:43.156123 | orchestrator | Friday 29 August 2025 14:52:36 +0000 (0:00:00.066) 0:00:36.618 *********
2025-08-29 14:54:43.156131 | orchestrator |
2025-08-29 14:54:43.156138 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 14:54:43.156145 | orchestrator | Friday 29 August 2025 14:52:36 +0000 (0:00:00.066) 0:00:36.684 *********
2025-08-29 14:54:43.156152 | orchestrator |
2025-08-29 14:54:43.156201 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-08-29 14:54:43.156210 | orchestrator | Friday 29 August 2025 14:52:36 +0000 (0:00:00.062) 0:00:36.747 *********
2025-08-29 14:54:43.156217 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:54:43.156224 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:54:43.156229 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:54:43.156235 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:43.156240 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:43.156246 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:54:43.156252 | orchestrator |
2025-08-29 14:54:43.156258 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-08-29 14:54:43.156265 | orchestrator | Friday 29 August 2025 14:52:38 +0000 (0:00:01.679) 0:00:38.426 *********
2025-08-29 14:54:43.156272 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:43.156279 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:54:43.156285 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:54:43.156292 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:54:43.156298 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:54:43.156305 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:54:43.156311 | orchestrator |
2025-08-29 14:54:43.156318 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-08-29 14:54:43.156325 | orchestrator |
2025-08-29 14:54:43.156331 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-08-29 14:54:43.156338 | orchestrator | Friday 29 August 2025 14:53:12 +0000 (0:00:34.286) 0:01:12.712 *********
2025-08-29 14:54:43.156344 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:54:43.156351 | orchestrator |
2025-08-29 14:54:43.156358 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-08-29 14:54:43.156369 | orchestrator | Friday 29 August 2025 14:53:13 +0000 (0:00:00.984) 0:01:13.696 *********
2025-08-29 14:54:43.156376 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:54:43.156383 | orchestrator |
2025-08-29 14:54:43.156389 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-08-29 14:54:43.156396 | orchestrator | Friday 29 August 2025 14:53:14 +0000 (0:00:01.284) 0:01:14.981 *********
2025-08-29 14:54:43.156402 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:43.156409 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:54:43.156416 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:43.156423 | orchestrator |
2025-08-29 14:54:43.156430 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-08-29 14:54:43.156444 | orchestrator | Friday 29 August 2025 14:53:16 +0000 (0:00:01.514) 0:01:16.495 *********
2025-08-29 14:54:43.156450 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:54:43.156457 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:43.156464 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:43.156477 | orchestrator |
2025-08-29 14:54:43.156484 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-08-29 14:54:43.156491 | orchestrator | Friday 29 August 2025 14:53:17 +0000 (0:00:00.866)
0:01:17.362 ********* 2025-08-29 14:54:43.156497 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:43.156504 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:43.156514 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:43.156520 | orchestrator | 2025-08-29 14:54:43.156526 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-08-29 14:54:43.156533 | orchestrator | Friday 29 August 2025 14:53:17 +0000 (0:00:00.465) 0:01:17.828 ********* 2025-08-29 14:54:43.156540 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:43.156546 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:43.156553 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:43.156559 | orchestrator | 2025-08-29 14:54:43.156564 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-08-29 14:54:43.156570 | orchestrator | Friday 29 August 2025 14:53:18 +0000 (0:00:00.760) 0:01:18.588 ********* 2025-08-29 14:54:43.156576 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:43.156582 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:43.156588 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:43.156595 | orchestrator | 2025-08-29 14:54:43.156602 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-08-29 14:54:43.156608 | orchestrator | Friday 29 August 2025 14:53:19 +0000 (0:00:00.958) 0:01:19.546 ********* 2025-08-29 14:54:43.156615 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:43.156622 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:43.156628 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:43.156634 | orchestrator | 2025-08-29 14:54:43.156641 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-08-29 14:54:43.156647 | orchestrator | Friday 29 August 2025 14:53:19 +0000 (0:00:00.367) 0:01:19.914 ********* 2025-08-29 14:54:43.156653 | 
orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:43.156660 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:43.156666 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:43.156672 | orchestrator | 2025-08-29 14:54:43.156678 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-08-29 14:54:43.156685 | orchestrator | Friday 29 August 2025 14:53:20 +0000 (0:00:00.421) 0:01:20.335 ********* 2025-08-29 14:54:43.156692 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:43.156698 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:43.156705 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:43.156712 | orchestrator | 2025-08-29 14:54:43.156718 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-08-29 14:54:43.156725 | orchestrator | Friday 29 August 2025 14:53:20 +0000 (0:00:00.410) 0:01:20.746 ********* 2025-08-29 14:54:43.156732 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:43.156738 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:43.156744 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:43.156751 | orchestrator | 2025-08-29 14:54:43.156757 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-08-29 14:54:43.156764 | orchestrator | Friday 29 August 2025 14:53:21 +0000 (0:00:00.628) 0:01:21.375 ********* 2025-08-29 14:54:43.156770 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:43.156777 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:43.156784 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:43.156791 | orchestrator | 2025-08-29 14:54:43.156797 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-08-29 14:54:43.156804 | orchestrator | Friday 29 August 2025 14:53:21 +0000 (0:00:00.381) 0:01:21.756 ********* 2025-08-29 14:54:43.156821 | 
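Earlier in this play, the ovn-controller role reconciles chassis external-ids (ovn-bridge-mappings, ovn-chassis-mac-mappings, ovn-cms-options) as present/absent items, which is why stale values show up as `absent` entries before the new values are set `present`. A minimal sketch of that desired-state logic, assuming a hypothetical `reconcile_external_ids` helper that is not the role's actual code:

```python
def reconcile_external_ids(current, items):
    """Apply {'name', 'value', 'state'} items (as seen in the log output
    above) to a dict of chassis external-ids; return (new_dict, changed)."""
    desired = dict(current)
    for item in items:
        name, value, state = item["name"], item["value"], item["state"]
        if state == "present":
            desired[name] = value
        elif state == "absent" and desired.get(name) == value:
            # 'absent' items carry the old value that should be cleared.
            del desired[name]
    return desired, desired != current

current = {"ovn-bridge-mappings": "physnet1:br-ex"}
items = [
    {"name": "ovn-bridge-mappings", "value": "physnet1:br-ex", "state": "absent"},
    {"name": "ovn-bridge-mappings", "value": "physnet1:br-ex", "state": "present"},
    {"name": "ovn-cms-options",
     "value": "enable-chassis-as-gw,availability-zones=nova", "state": "present"},
]
new, changed = reconcile_external_ids(current, items)
```

On a real chassis the resulting keys would be written with something like `ovs-vsctl set Open_vSwitch . external-ids:ovn-bridge-mappings=physnet1:br-ex`; the sketch only models the item semantics.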
orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:43.156828 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:43.156835 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:43.156841 | orchestrator |
2025-08-29 14:54:43.156848 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-08-29 14:54:43.156855 | orchestrator | Friday 29 August 2025 14:53:21 +0000 (0:00:00.382) 0:01:22.139 *********
2025-08-29 14:54:43.156862 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:43.156868 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:43.156875 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:43.156881 | orchestrator |
2025-08-29 14:54:43.156888 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-08-29 14:54:43.156894 | orchestrator | Friday 29 August 2025 14:53:22 +0000 (0:00:00.325) 0:01:22.464 *********
2025-08-29 14:54:43.156901 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:43.156907 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:43.156914 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:43.156920 | orchestrator |
2025-08-29 14:54:43.156927 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-08-29 14:54:43.156934 | orchestrator | Friday 29 August 2025 14:53:22 +0000 (0:00:00.385) 0:01:22.850 *********
2025-08-29 14:54:43.156941 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:43.156947 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:43.156953 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:43.156960 | orchestrator |
2025-08-29 14:54:43.156967 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-08-29 14:54:43.156973 | orchestrator | Friday 29 August 2025 14:53:23 +0000 (0:00:00.619) 0:01:23.470 *********
2025-08-29 14:54:43.156984 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:43.156991 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:43.156998 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:43.157004 | orchestrator |
2025-08-29 14:54:43.157011 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-08-29 14:54:43.157017 | orchestrator | Friday 29 August 2025 14:53:23 +0000 (0:00:00.345) 0:01:23.815 *********
2025-08-29 14:54:43.157024 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:43.157031 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:43.157037 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:43.157044 | orchestrator |
2025-08-29 14:54:43.157050 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-08-29 14:54:43.157057 | orchestrator | Friday 29 August 2025 14:53:23 +0000 (0:00:00.320) 0:01:24.136 *********
2025-08-29 14:54:43.157064 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:43.157070 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:43.157083 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:43.157089 | orchestrator |
2025-08-29 14:54:43.157096 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-08-29 14:54:43.157101 | orchestrator | Friday 29 August 2025 14:53:24 +0000 (0:00:00.332) 0:01:24.468 *********
2025-08-29 14:54:43.157108 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:54:43.157114 | orchestrator |
2025-08-29 14:54:43.157120 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-08-29 14:54:43.157127 | orchestrator | Friday 29 August 2025 14:53:25 +0000 (0:00:00.911) 0:01:25.379 *********
2025-08-29 14:54:43.157133 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:54:43.157139 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:43.157146 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:43.157152 | orchestrator |
2025-08-29 14:54:43.157175 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-08-29 14:54:43.157182 | orchestrator | Friday 29 August 2025 14:53:25 +0000 (0:00:00.574) 0:01:25.953 *********
2025-08-29 14:54:43.157188 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:54:43.157195 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:43.157207 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:43.157214 | orchestrator |
2025-08-29 14:54:43.157220 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-08-29 14:54:43.157227 | orchestrator | Friday 29 August 2025 14:53:26 +0000 (0:00:00.549) 0:01:26.504 *********
2025-08-29 14:54:43.157233 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:43.157240 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:43.157246 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:43.157252 | orchestrator |
2025-08-29 14:54:43.157259 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-08-29 14:54:43.157266 | orchestrator | Friday 29 August 2025 14:53:27 +0000 (0:00:00.839) 0:01:27.343 *********
2025-08-29 14:54:43.157272 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:43.157278 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:43.157285 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:43.157291 | orchestrator |
2025-08-29 14:54:43.157297 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-08-29 14:54:43.157304 | orchestrator | Friday 29 August 2025 14:53:27 +0000 (0:00:00.439) 0:01:27.783 *********
2025-08-29 14:54:43.157310 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:43.157316 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:43.157322 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:43.157328 | orchestrator |
2025-08-29 14:54:43.157334 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-08-29 14:54:43.157340 | orchestrator | Friday 29 August 2025 14:53:28 +0000 (0:00:00.518) 0:01:28.301 *********
2025-08-29 14:54:43.157347 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:43.157353 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:43.157360 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:43.157366 | orchestrator |
2025-08-29 14:54:43.157372 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-08-29 14:54:43.157379 | orchestrator | Friday 29 August 2025 14:53:28 +0000 (0:00:00.410) 0:01:28.711 *********
2025-08-29 14:54:43.157385 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:43.157391 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:43.157398 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:43.157404 | orchestrator |
2025-08-29 14:54:43.157411 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-08-29 14:54:43.157417 | orchestrator | Friday 29 August 2025 14:53:29 +0000 (0:00:00.672) 0:01:29.384 *********
2025-08-29 14:54:43.157423 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:43.157429 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:43.157436 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:43.157441 | orchestrator |
2025-08-29 14:54:43.157448 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-08-29 14:54:43.157454 | orchestrator | Friday 29 August 2025 14:53:29 +0000 (0:00:00.351) 0:01:29.736 *********
2025-08-29 14:54:43.157462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157535 | orchestrator |
2025-08-29 14:54:43.157540 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-08-29 14:54:43.157545 | orchestrator | Friday 29 August 2025 14:53:31 +0000 (0:00:01.566) 0:01:31.302 *********
2025-08-29 14:54:43.157551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157616 | orchestrator |
2025-08-29 14:54:43.157622 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-08-29 14:54:43.157629 | orchestrator | Friday 29 August 2025 14:53:35 +0000 (0:00:04.157) 0:01:35.460 *********
2025-08-29 14:54:43.157635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.157769 | orchestrator |
2025-08-29 14:54:43.157776 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-08-29 14:54:43.157782 | orchestrator | Friday 29 August 2025 14:53:37 +0000 (0:00:02.259) 0:01:37.719 *********
2025-08-29 14:54:43.157788 | orchestrator |
2025-08-29 14:54:43.157795 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-08-29 14:54:43.157801 | orchestrator | Friday 29 August 2025 14:53:37 +0000 (0:00:00.376) 0:01:38.096 *********
2025-08-29 14:54:43.157807 | orchestrator |
2025-08-29 14:54:43.157813 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-08-29 14:54:43.157818 | orchestrator | Friday 29 August 2025 14:53:37 +0000 (0:00:00.069) 0:01:38.166 *********
2025-08-29 14:54:43.157823 | orchestrator |
2025-08-29 14:54:43.157828 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-08-29 14:54:43.157834 | orchestrator | Friday 29 August 2025 14:53:38 +0000 (0:00:00.068) 0:01:38.235 *********
2025-08-29 14:54:43.157839 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:43.157846 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:54:43.157852 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:54:43.157858 | orchestrator |
2025-08-29 14:54:43.157863 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-08-29 14:54:43.157870 | orchestrator | Friday 29 August 2025 14:53:46 +0000 (0:00:08.714) 0:01:46.949 *********
2025-08-29 14:54:43.157881 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:43.157886 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:54:43.157892 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:54:43.157898 | orchestrator |
2025-08-29 14:54:43.157904 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-08-29 14:54:43.157910 | orchestrator | Friday 29 August 2025 14:53:54 +0000 (0:00:07.665) 0:01:54.614 *********
2025-08-29 14:54:43.157915 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:43.157921 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:54:43.157927 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:54:43.157933 | orchestrator |
2025-08-29 14:54:43.157939 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-08-29 14:54:43.157946 | orchestrator | Friday 29 August 2025 14:54:02 +0000 (0:00:07.786) 0:02:02.401 *********
2025-08-29 14:54:43.157952 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:43.157958 | orchestrator |
2025-08-29 14:54:43.157964 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-08-29 14:54:43.157971 | orchestrator | Friday 29 August 2025 14:54:02 +0000 (0:00:00.473) 0:02:02.875 *********
2025-08-29 14:54:43.157978 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:43.157984 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:43.157991 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:54:43.157997 | orchestrator |
2025-08-29 14:54:43.158003 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-08-29 14:54:43.158010 | orchestrator | Friday 29 August 2025 14:54:03 +0000 (0:00:00.956) 0:02:03.832 *********
2025-08-29 14:54:43.158070 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:43.158078 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:43.158085 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:43.158092 | orchestrator |
2025-08-29 14:54:43.158099 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-08-29 14:54:43.158110 | orchestrator | Friday 29 August 2025 14:54:04 +0000 (0:00:00.713) 0:02:04.545 *********
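The "Get OVN_Northbound/OVN_Southbound cluster leader" tasks above determine each node's Raft role so that the connection settings are only changed on the leader (hence `changed: [testbed-node-0]` and `skipping` on the followers). Roughly, such a check parses the `cluster/status` output of the clustered ovsdb-server; the sample text below is illustrative, not taken from this job:

```python
def find_role(status_text):
    """Return the 'Role:' field (leader/follower) from ovsdb-server
    cluster/status output, or None when it is absent."""
    for line in status_text.splitlines():
        if line.strip().startswith("Role:"):
            return line.split(":", 1)[1].strip()
    return None

# Illustrative sample resembling `cluster/status OVN_Northbound` output.
sample = """\
Name: OVN_Northbound
Cluster ID: f0a1
Server ID: 1b2c
Address: tcp:192.0.2.10:6643
Status: cluster member
Role: leader
Term: 1
"""
is_leader = find_role(sample) == "leader"
```

On a deployed node the status would come from something like `ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound` inside the ovn_nb_db container; only the parsing step is sketched here.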
2025-08-29 14:54:43.158118 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:54:43.158125 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:43.158131 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:43.158137 | orchestrator |
2025-08-29 14:54:43.158144 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-08-29 14:54:43.158150 | orchestrator | Friday 29 August 2025 14:54:05 +0000 (0:00:00.843) 0:02:05.388 *********
2025-08-29 14:54:43.158156 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:43.158187 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:43.158194 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:43.158200 | orchestrator |
2025-08-29 14:54:43.158206 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-08-29 14:54:43.158212 | orchestrator | Friday 29 August 2025 14:54:05 +0000 (0:00:00.735) 0:02:06.124 *********
2025-08-29 14:54:43.158218 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:43.158224 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:43.158238 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:54:43.158245 | orchestrator |
2025-08-29 14:54:43.158251 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-08-29 14:54:43.158258 | orchestrator | Friday 29 August 2025 14:54:07 +0000 (0:00:01.124) 0:02:07.249 *********
2025-08-29 14:54:43.158265 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:54:43.158271 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:43.158278 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:43.158285 | orchestrator |
2025-08-29 14:54:43.158292 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-08-29 14:54:43.158298 | orchestrator | Friday 29 August 2025 14:54:07 +0000 (0:00:00.721) 0:02:07.971 *********
2025-08-29 14:54:43.158305 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:54:43.158312 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:43.158318 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:43.158324 | orchestrator |
2025-08-29 14:54:43.158331 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-08-29 14:54:43.158344 | orchestrator | Friday 29 August 2025 14:54:08 +0000 (0:00:00.302) 0:02:08.273 *********
2025-08-29 14:54:43.158352 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.158359 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.158367 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.158374 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.158382 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.158388 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.158395 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.158405 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:43.158417 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image':
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158424 | orchestrator | 2025-08-29 14:54:43.158431 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-08-29 14:54:43.158438 | orchestrator | Friday 29 August 2025 14:54:09 +0000 (0:00:01.424) 0:02:09.698 ********* 2025-08-29 14:54:43.158450 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158457 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158464 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158472 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158492 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158508 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158515 | orchestrator | 2025-08-29 14:54:43.158522 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-08-29 14:54:43.158529 | orchestrator | Friday 29 August 2025 14:54:13 +0000 (0:00:04.074) 0:02:13.772 ********* 2025-08-29 14:54:43.158549 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158556 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158563 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 
'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158575 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158594 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:43.158608 | orchestrator | 2025-08-29 14:54:43.158614 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 14:54:43.158624 | orchestrator | Friday 29 August 2025 14:54:16 +0000 (0:00:03.218) 0:02:16.991 ********* 2025-08-29 14:54:43.158631 | orchestrator | 2025-08-29 14:54:43.158637 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 14:54:43.158648 | orchestrator | Friday 29 August 2025 14:54:16 +0000 (0:00:00.069) 0:02:17.061 ********* 2025-08-29 14:54:43.158654 | orchestrator | 2025-08-29 14:54:43.158660 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 14:54:43.158666 | orchestrator | Friday 29 August 2025 14:54:16 +0000 (0:00:00.070) 0:02:17.131 ********* 2025-08-29 14:54:43.158672 | orchestrator | 2025-08-29 14:54:43.158678 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-08-29 14:54:43.158684 | orchestrator | Friday 29 August 2025 14:54:17 +0000 (0:00:00.067) 0:02:17.198 ********* 2025-08-29 14:54:43.158690 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:43.158696 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:43.158702 | orchestrator | 2025-08-29 14:54:43.158712 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-08-29 14:54:43.158719 | orchestrator | Friday 29 
August 2025 14:54:23 +0000 (0:00:06.246) 0:02:23.445 ********* 2025-08-29 14:54:43.158726 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:43.158732 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:43.158739 | orchestrator | 2025-08-29 14:54:43.158744 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-08-29 14:54:43.158750 | orchestrator | Friday 29 August 2025 14:54:29 +0000 (0:00:06.309) 0:02:29.755 ********* 2025-08-29 14:54:43.158756 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:43.158762 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:43.158768 | orchestrator | 2025-08-29 14:54:43.158774 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-08-29 14:54:43.158780 | orchestrator | Friday 29 August 2025 14:54:36 +0000 (0:00:07.408) 0:02:37.163 ********* 2025-08-29 14:54:43.158786 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:43.158793 | orchestrator | 2025-08-29 14:54:43.158799 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-08-29 14:54:43.158806 | orchestrator | Friday 29 August 2025 14:54:37 +0000 (0:00:00.138) 0:02:37.301 ********* 2025-08-29 14:54:43.158811 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:43.158817 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:43.158823 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:43.158829 | orchestrator | 2025-08-29 14:54:43.158835 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-08-29 14:54:43.158841 | orchestrator | Friday 29 August 2025 14:54:37 +0000 (0:00:00.787) 0:02:38.088 ********* 2025-08-29 14:54:43.158847 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:43.158852 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:43.158858 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:43.158864 | 
orchestrator | 2025-08-29 14:54:43.158870 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-08-29 14:54:43.158876 | orchestrator | Friday 29 August 2025 14:54:38 +0000 (0:00:00.698) 0:02:38.787 ********* 2025-08-29 14:54:43.158882 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:43.158887 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:43.158893 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:43.158899 | orchestrator | 2025-08-29 14:54:43.158904 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-08-29 14:54:43.158911 | orchestrator | Friday 29 August 2025 14:54:39 +0000 (0:00:00.861) 0:02:39.648 ********* 2025-08-29 14:54:43.158917 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:43.158923 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:43.158930 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:43.158936 | orchestrator | 2025-08-29 14:54:43.158942 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-08-29 14:54:43.158948 | orchestrator | Friday 29 August 2025 14:54:40 +0000 (0:00:00.911) 0:02:40.560 ********* 2025-08-29 14:54:43.158955 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:43.158961 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:43.158968 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:43.158974 | orchestrator | 2025-08-29 14:54:43.158988 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-08-29 14:54:43.158995 | orchestrator | Friday 29 August 2025 14:54:41 +0000 (0:00:00.890) 0:02:41.451 ********* 2025-08-29 14:54:43.159002 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:43.159009 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:43.159016 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:43.159022 | orchestrator | 2025-08-29 14:54:43.159028 | orchestrator | 
PLAY RECAP ********************************************************************* 2025-08-29 14:54:43.159035 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-08-29 14:54:43.159042 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-08-29 14:54:43.159048 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-08-29 14:54:43.159054 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:54:43.159060 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:54:43.159066 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:54:43.159072 | orchestrator | 2025-08-29 14:54:43.159078 | orchestrator | 2025-08-29 14:54:43.159084 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:54:43.159091 | orchestrator | Friday 29 August 2025 14:54:42 +0000 (0:00:00.884) 0:02:42.335 ********* 2025-08-29 14:54:43.159101 | orchestrator | =============================================================================== 2025-08-29 14:54:43.159108 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 34.29s 2025-08-29 14:54:43.159115 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.91s 2025-08-29 14:54:43.159120 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 15.19s 2025-08-29 14:54:43.159126 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.96s 2025-08-29 14:54:43.159133 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.98s 2025-08-29 14:54:43.159138 | orchestrator | 
ovn-db : Copying over config.json files for services -------------------- 4.16s 2025-08-29 14:54:43.159143 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.07s 2025-08-29 14:54:43.159154 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.22s 2025-08-29 14:54:43.159179 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.21s 2025-08-29 14:54:43.159186 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.26s 2025-08-29 14:54:43.159192 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.17s 2025-08-29 14:54:43.159197 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.83s 2025-08-29 14:54:43.159203 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.68s 2025-08-29 14:54:43.159210 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.68s 2025-08-29 14:54:43.159216 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.67s 2025-08-29 14:54:43.159221 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.60s 2025-08-29 14:54:43.159226 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.60s 2025-08-29 14:54:43.159232 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.57s 2025-08-29 14:54:43.159239 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.51s 2025-08-29 14:54:43.159252 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.50s 2025-08-29 14:54:43.159259 | orchestrator | 2025-08-29 14:54:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:46.203499 | orchestrator | 2025-08-29 14:54:46 | INFO  
| Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:54:46.203649 | orchestrator | 2025-08-29 14:54:46 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:54:46.203676 | orchestrator | 2025-08-29 14:54:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:49.252312 | orchestrator | 2025-08-29 14:54:49 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:54:49.254915 | orchestrator | 2025-08-29 14:54:49 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:54:49.254966 | orchestrator | 2025-08-29 14:54:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:52.299030 | orchestrator | 2025-08-29 14:54:52 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:54:52.299301 | orchestrator | 2025-08-29 14:54:52 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:54:52.299381 | orchestrator | 2025-08-29 14:54:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:55.337022 | orchestrator | 2025-08-29 14:54:55 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:54:55.343308 | orchestrator | 2025-08-29 14:54:55 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:54:55.343393 | orchestrator | 2025-08-29 14:54:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:58.382522 | orchestrator | 2025-08-29 14:54:58 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:54:58.382594 | orchestrator | 2025-08-29 14:54:58 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:54:58.382600 | orchestrator | 2025-08-29 14:54:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:01.428655 | orchestrator | 2025-08-29 14:55:01 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 
14:55:01.430868 | orchestrator | 2025-08-29 14:55:01 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:55:01.430907 | orchestrator | 2025-08-29 14:55:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:04.477611 | orchestrator | 2025-08-29 14:55:04 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:55:04.478944 | orchestrator | 2025-08-29 14:55:04 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:55:04.479010 | orchestrator | 2025-08-29 14:55:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:07.535003 | orchestrator | 2025-08-29 14:55:07 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:55:07.535258 | orchestrator | 2025-08-29 14:55:07 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:55:07.535275 | orchestrator | 2025-08-29 14:55:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:10.584878 | orchestrator | 2025-08-29 14:55:10 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:55:10.586514 | orchestrator | 2025-08-29 14:55:10 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:55:10.586558 | orchestrator | 2025-08-29 14:55:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:13.622642 | orchestrator | 2025-08-29 14:55:13 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:55:13.625833 | orchestrator | 2025-08-29 14:55:13 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:55:13.625925 | orchestrator | 2025-08-29 14:55:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:16.676677 | orchestrator | 2025-08-29 14:55:16 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:55:16.678067 | orchestrator | 2025-08-29 14:55:16 | INFO  | Task 
5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:55:16.678325 | orchestrator | 2025-08-29 14:55:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:19.716060 | orchestrator | 2025-08-29 14:55:19 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:55:19.716895 | orchestrator | 2025-08-29 14:55:19 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:55:19.716949 | orchestrator | 2025-08-29 14:55:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:22.771030 | orchestrator | 2025-08-29 14:55:22 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:55:22.772487 | orchestrator | 2025-08-29 14:55:22 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:55:22.772527 | orchestrator | 2025-08-29 14:55:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:25.813990 | orchestrator | 2025-08-29 14:55:25 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:55:25.816304 | orchestrator | 2025-08-29 14:55:25 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:55:25.816342 | orchestrator | 2025-08-29 14:55:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:28.874451 | orchestrator | 2025-08-29 14:55:28 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:55:28.877088 | orchestrator | 2025-08-29 14:55:28 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:55:28.877317 | orchestrator | 2025-08-29 14:55:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:31.925199 | orchestrator | 2025-08-29 14:55:31 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:55:31.927087 | orchestrator | 2025-08-29 14:55:31 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 
14:55:31.927165 | orchestrator | 2025-08-29 14:55:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:34.976939 | orchestrator | 2025-08-29 14:55:34 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:55:34.979701 | orchestrator | 2025-08-29 14:55:34 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:55:34.979776 | orchestrator | 2025-08-29 14:55:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:38.023975 | orchestrator | 2025-08-29 14:55:38 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:55:38.025089 | orchestrator | 2025-08-29 14:55:38 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:55:38.025635 | orchestrator | 2025-08-29 14:55:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:41.073495 | orchestrator | 2025-08-29 14:55:41 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:55:41.074211 | orchestrator | 2025-08-29 14:55:41 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:55:41.074255 | orchestrator | 2025-08-29 14:55:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:44.106848 | orchestrator | 2025-08-29 14:55:44 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:55:44.109029 | orchestrator | 2025-08-29 14:55:44 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:55:44.109190 | orchestrator | 2025-08-29 14:55:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:47.146422 | orchestrator | 2025-08-29 14:55:47 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED 2025-08-29 14:55:47.146550 | orchestrator | 2025-08-29 14:55:47 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 14:55:47.146567 | orchestrator | 2025-08-29 14:55:47 | INFO  | Wait 1 second(s) 
until the next check
2025-08-29 14:55:50.179486 | orchestrator | 2025-08-29 14:55:50 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state STARTED
2025-08-29 14:55:50.180569 | orchestrator | 2025-08-29 14:55:50 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:55:50.180867 | orchestrator | 2025-08-29 14:55:50 | INFO  | Wait 1 second(s) until the next check
[the same three-line poll for tasks d7b53344-d901-4c1e-81d6-e4ef7aa340d4 and 5dcb4e15-8ed9-4f34-b558-751b1d874f50 repeats every ~3 s from 14:55:53 through 14:57:36, both tasks remaining in state STARTED]
2025-08-29 14:57:39.861277 | orchestrator | 2025-08-29 14:57:39 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state
STARTED
2025-08-29 14:57:39.861480 | orchestrator | 2025-08-29 14:57:39 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:57:39.861527 | orchestrator | 2025-08-29 14:57:39 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:57:42.905152 | orchestrator | 2025-08-29 14:57:42 | INFO  | Task d7b53344-d901-4c1e-81d6-e4ef7aa340d4 is in state SUCCESS
2025-08-29 14:57:42.906317 | orchestrator |
2025-08-29 14:57:42.906366 | orchestrator |
2025-08-29 14:57:42.906376 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 14:57:42.906384 | orchestrator |
2025-08-29 14:57:42.906392 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 14:57:42.906399 | orchestrator | Friday 29 August 2025 14:50:39 +0000 (0:00:00.314) 0:00:00.314 *********
2025-08-29 14:57:42.906406 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:57:42.906415 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:57:42.906423 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:57:42.906430 | orchestrator |
2025-08-29 14:57:42.906437 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 14:57:42.906443 | orchestrator | Friday 29 August 2025 14:50:40 +0000 (0:00:00.375) 0:00:00.689 *********
2025-08-29 14:57:42.906451 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-08-29 14:57:42.906458 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-08-29 14:57:42.906465 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-08-29 14:57:42.906473 | orchestrator |
2025-08-29 14:57:42.906480 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-08-29 14:57:42.906486 | orchestrator |
2025-08-29 14:57:42.906493 | orchestrator | TASK [loadbalancer : include_tasks]
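The two task IDs above are polled on a fixed interval until they leave the STARTED state. A minimal sketch of such a wait loop, assuming a hypothetical `get_state` callback (the real state source is the OSISM task API, of which this log only shows the output):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=600.0):
    """Poll task states until every task reaches a terminal state.

    get_state is a hypothetical callback returning a state string
    such as "STARTED" or "SUCCESS" for a given task id.
    """
    terminal = {"SUCCESS", "FAILURE", "REVOKED"}
    deadline = time.monotonic() + timeout
    while True:
        # Fetch the current state of every watched task.
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(s in terminal for s in states.values()):
            return states
        if time.monotonic() >= deadline:
            raise TimeoutError(f"tasks still pending: {states}")
        print(f"Wait {interval} second(s) until the next check")
        time.sleep(interval)
```

Note that the log's gaps of roughly three seconds between "Wait 1 second(s)" messages suggest the state query itself takes about two seconds on top of the configured interval.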
********************************************
2025-08-29 14:57:42.906499 | orchestrator | Friday 29 August 2025 14:50:40 +0000 (0:00:00.687) 0:00:01.377 *********
2025-08-29 14:57:42.906507 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:57:42.906513 | orchestrator |
2025-08-29 14:57:42.906519 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-08-29 14:57:42.906526 | orchestrator | Friday 29 August 2025 14:50:41 +0000 (0:00:00.861) 0:00:02.238 *********
2025-08-29 14:57:42.906532 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:57:42.906539 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:57:42.906545 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:57:42.906551 | orchestrator |
2025-08-29 14:57:42.906557 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-08-29 14:57:42.906564 | orchestrator | Friday 29 August 2025 14:50:42 +0000 (0:00:00.802) 0:00:03.041 *********
2025-08-29 14:57:42.906570 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:57:42.906576 | orchestrator |
2025-08-29 14:57:42.906583 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-08-29 14:57:42.906589 | orchestrator | Friday 29 August 2025 14:50:43 +0000 (0:00:00.714) 0:00:03.755 *********
2025-08-29 14:57:42.906596 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:57:42.906603 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:57:42.906609 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:57:42.906616 | orchestrator |
2025-08-29 14:57:42.906622 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-08-29 14:57:42.906629 | orchestrator | Friday 29 August 2025 14:50:43 +0000 (0:00:00.683) 0:00:04.438 *********
2025-08-29 14:57:42.906636 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-08-29 14:57:42.906644 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-08-29 14:57:42.906650 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-08-29 14:57:42.906658 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-08-29 14:57:42.906665 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-08-29 14:57:42.906694 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-08-29 14:57:42.906701 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-08-29 14:57:42.906710 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-08-29 14:57:42.906716 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-08-29 14:57:42.906723 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-08-29 14:57:42.906730 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-08-29 14:57:42.906734 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-08-29 14:57:42.906738 | orchestrator |
2025-08-29 14:57:42.906743 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-08-29 14:57:42.906747 | orchestrator | Friday 29 August 2025 14:50:47 +0000 (0:00:03.353) 0:00:07.792 *********
2025-08-29 14:57:42.906751 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-08-29 14:57:42.906756 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-08-29
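The sysctl items applied above amount to the following kernel settings on each node. A sketch of the resulting configuration, assuming the generic sysctl.d/modules-load.d file conventions (the exact file names Kolla writes are not shown in this log):

```
# /etc/sysctl.d/<kolla-managed file>.conf -- values from the task items above
net.ipv6.ip_nonlocal_bind = 1
net.ipv4.ip_nonlocal_bind = 1
net.unix.max_dgram_qlen = 128
# net.ipv4.tcp_retries2 is left untouched (KOLLA_UNSET)

# /etc/modules-load.d/<file>.conf -- persists ip_vs across reboots
# (written by the module-load tasks below)
ip_vs
```

The ip_nonlocal_bind settings let keepalived and haproxy bind to the virtual IP even on nodes that do not currently hold it.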
14:57:42.906760 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-08-29 14:57:42.906807 | orchestrator |
2025-08-29 14:57:42.906812 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-08-29 14:57:42.906817 | orchestrator | Friday 29 August 2025 14:50:48 +0000 (0:00:01.166) 0:00:08.958 *********
2025-08-29 14:57:42.906821 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-08-29 14:57:42.906826 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-08-29 14:57:42.906830 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-08-29 14:57:42.906835 | orchestrator |
2025-08-29 14:57:42.906839 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-08-29 14:57:42.906843 | orchestrator | Friday 29 August 2025 14:50:50 +0000 (0:00:02.072) 0:00:11.031 *********
2025-08-29 14:57:42.906848 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-08-29 14:57:42.906852 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.906870 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-08-29 14:57:42.906874 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.906878 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-08-29 14:57:42.906883 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.906887 | orchestrator |
2025-08-29 14:57:42.906891 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-08-29 14:57:42.906895 | orchestrator | Friday 29 August 2025 14:50:51 +0000 (0:00:00.559) 0:00:11.590 *********
2025-08-29 14:57:42.906902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 14:57:42.906912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 14:57:42.907000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 14:57:42.907008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:57:42.907034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:57:42.907044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:57:42.907049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:57:42.907055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:57:42.907059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:57:42.907068 | orchestrator | 2025-08-29 14:57:42.907073 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-08-29 14:57:42.907077 | orchestrator | Friday 29 August 2025 14:50:54 +0000 (0:00:03.105) 0:00:14.696 ********* 2025-08-29 14:57:42.907081 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.907085 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.907089 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.907093 | orchestrator | 2025-08-29 14:57:42.907098 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config 
subdirectories exist] **** 2025-08-29 14:57:42.907102 | orchestrator | Friday 29 August 2025 14:50:55 +0000 (0:00:01.175) 0:00:15.872 ********* 2025-08-29 14:57:42.907106 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-08-29 14:57:42.907110 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-08-29 14:57:42.907114 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-08-29 14:57:42.907118 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-08-29 14:57:42.907122 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-08-29 14:57:42.907126 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-08-29 14:57:42.907131 | orchestrator | 2025-08-29 14:57:42.907135 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-08-29 14:57:42.907139 | orchestrator | Friday 29 August 2025 14:50:58 +0000 (0:00:02.752) 0:00:18.625 ********* 2025-08-29 14:57:42.907143 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.907147 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.907151 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.907155 | orchestrator | 2025-08-29 14:57:42.907160 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-08-29 14:57:42.907257 | orchestrator | Friday 29 August 2025 14:50:59 +0000 (0:00:01.276) 0:00:19.902 ********* 2025-08-29 14:57:42.907265 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:57:42.907272 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:57:42.907279 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:57:42.907285 | orchestrator | 2025-08-29 14:57:42.907291 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-08-29 14:57:42.907297 | orchestrator | Friday 29 August 2025 14:51:02 +0000 (0:00:03.521) 0:00:23.423 ********* 2025-08-29 14:57:42.907308 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'haproxy', ...})  [the haproxy, proxysql and keepalived service definitions already shown under "Ensuring config directories exist" are repeated verbatim here and skipped for testbed-node-0, testbed-node-2 and testbed-node-1; only the haproxy healthcheck IP differs per node (192.168.16.10/.12/.11)]
2025-08-29 14:57:42.907346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8013bca988dc373eeefc6bfbca8b519813d61f93', '__omit_place_holder__8013bca988dc373eeefc6bfbca8b519813d61f93'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
[the identical haproxy-ssh skip repeats for testbed-node-2 and testbed-node-1]
2025-08-29 14:57:42.907352 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.907390 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.907437 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.907444 |
orchestrator | 2025-08-29 14:57:42.907451 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-08-29 14:57:42.907458 | orchestrator | Friday 29 August 2025 14:51:04 +0000 (0:00:01.299) 0:00:24.722 ********* 2025-08-29 14:57:42.907465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 14:57:42.907477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 14:57:42.907490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 14:57:42.907511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:57:42.907519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:57:42.907526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8013bca988dc373eeefc6bfbca8b519813d61f93', 
'__omit_place_holder__8013bca988dc373eeefc6bfbca8b519813d61f93'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 14:57:42.907534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:57:42.907545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:57:42.907553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:57:42.907572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:57:42.907580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8013bca988dc373eeefc6bfbca8b519813d61f93', '__omit_place_holder__8013bca988dc373eeefc6bfbca8b519813d61f93'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 14:57:42.907587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8013bca988dc373eeefc6bfbca8b519813d61f93', '__omit_place_holder__8013bca988dc373eeefc6bfbca8b519813d61f93'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 14:57:42.907594 | orchestrator | 2025-08-29 14:57:42.907601 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-08-29 14:57:42.907608 | orchestrator | Friday 29 August 2025 14:51:08 +0000 (0:00:04.565) 0:00:29.288 ********* 2025-08-29 14:57:42.907615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 14:57:42.907622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 14:57:42.907634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 14:57:42.907652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:57:42.907694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:57:42.907700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:57:42.907704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:57:42.907708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:57:42.907713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 14:57:42.907717 | orchestrator |
2025-08-29 14:57:42.907721 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2025-08-29 14:57:42.907728 | orchestrator | Friday 29 August 2025 14:51:12 +0000 (0:00:03.563) 0:00:32.851 *********
2025-08-29 14:57:42.907737 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-08-29 14:57:42.907746 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-08-29 14:57:42.907750 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-08-29 14:57:42.907754 | orchestrator |
2025-08-29 14:57:42.907758 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-08-29 14:57:42.907762 | orchestrator | Friday 29 August 2025 14:51:14 +0000 (0:00:02.453) 0:00:35.305 *********
2025-08-29 14:57:42.907766 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-08-29 14:57:42.907770 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-08-29 14:57:42.907774 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-08-29 14:57:42.907779 | orchestrator |
2025-08-29 14:57:42.908574 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-08-29 14:57:42.908604 | orchestrator | Friday 29 August 2025 14:51:22 +0000 (0:00:07.214) 0:00:42.520 *********
2025-08-29 14:57:42.908609 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.908613 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.908617 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.908621 | orchestrator |
2025-08-29 14:57:42.908626 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-08-29 14:57:42.908630 | orchestrator | Friday 29 August 2025 14:51:22 +0000 (0:00:00.898) 0:00:43.418 *********
2025-08-29 14:57:42.908634 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-08-29 14:57:42.908639 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-08-29 14:57:42.908643 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-08-29 14:57:42.908647 | orchestrator |
2025-08-29 14:57:42.908651 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-08-29 14:57:42.908656 | orchestrator | Friday 29 August 2025 14:51:26 +0000 (0:00:03.133) 0:00:46.552 *********
2025-08-29 14:57:42.908660 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-08-29 14:57:42.908664 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-08-29 14:57:42.908669 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-08-29 14:57:42.908673 | orchestrator |
2025-08-29 14:57:42.908677 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-08-29 14:57:42.908681 | orchestrator | Friday 29 August 2025 14:51:28 +0000 (0:00:02.446) 0:00:48.998 *********
2025-08-29 14:57:42.908685 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-08-29 14:57:42.908832 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-08-29 14:57:42.908839 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-08-29 14:57:42.908845 | orchestrator |
2025-08-29 14:57:42.908852 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-08-29 14:57:42.908858 | orchestrator | Friday 29 August 2025 14:51:30 +0000 (0:00:02.284) 0:00:51.282 *********
2025-08-29 14:57:42.908865 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-08-29 14:57:42.908873 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-08-29 14:57:42.908877 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-08-29 14:57:42.908889 | orchestrator |
2025-08-29 14:57:42.908894 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-08-29 14:57:42.908898 | orchestrator | Friday 29 August 2025 14:51:32 +0000 (0:00:02.026) 0:00:53.308 *********
2025-08-29 14:57:42.908902 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:57:42.908906 | orchestrator |
2025-08-29 14:57:42.908910 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-08-29 14:57:42.908914 | orchestrator | Friday 29 August 2025 14:51:33 +0000 (0:00:00.669) 0:00:53.978 *********
2025-08-29 14:57:42.908937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 14:57:42.908948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 14:57:42.908961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 14:57:42.908966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:57:42.908971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:57:42.908976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:57:42.908984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:57:42.908989 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:57:42.908996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:57:42.909000 | orchestrator | 2025-08-29 14:57:42.909004 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-08-29 14:57:42.909009 | orchestrator | Friday 29 August 2025 14:51:37 +0000 (0:00:04.367) 0:00:58.345 ********* 2025-08-29 14:57:42.909035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 
'timeout': '30'}}})  2025-08-29 14:57:42.909040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:57:42.909045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:57:42.909052 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.909057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:57:42.909061 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:57:42.909068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:57:42.909072 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.909077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:57:42.909084 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:57:42.909088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:57:42.909093 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.909100 | orchestrator | 2025-08-29 14:57:42.909104 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-08-29 14:57:42.909108 | orchestrator | Friday 29 August 2025 14:51:39 +0000 (0:00:01.253) 0:00:59.599 ********* 2025-08-29 14:57:42.909113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 14:57:42.909117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 14:57:42.909121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 14:57:42.909125 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.909133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 14:57:42.909141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 14:57:42.909146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 14:57:42.909150 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.909158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 14:57:42.909162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 14:57:42.909166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 14:57:42.909171 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.909175 | orchestrator |
2025-08-29 14:57:42.909179 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-08-29 14:57:42.909183 | orchestrator | Friday 29 August 2025 14:51:40 +0000 (0:00:01.418) 0:01:01.017 *********
2025-08-29 14:57:42.909190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 14:57:42.909198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 14:57:42.909202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 14:57:42.909207 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.909213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 14:57:42.909218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 14:57:42.909222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 14:57:42.909226 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.909230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 14:57:42.909237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 14:57:42.909245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 14:57:42.909249 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.909253 | orchestrator |
2025-08-29 14:57:42.909257 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-08-29 14:57:42.909262 | orchestrator | Friday 29 August 2025 14:51:41 +0000 (0:00:01.321) 0:01:02.339 *********
2025-08-29 14:57:42.909269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image':
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 14:57:42.909273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 14:57:42.909277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 14:57:42.909282 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.909286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 14:57:42.909292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 14:57:42.909297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 14:57:42.909301 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.909308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 14:57:42.909318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 14:57:42.909323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 14:57:42.909327 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.909331 | orchestrator |
2025-08-29 14:57:42.909335 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-08-29 14:57:42.909340 | orchestrator | Friday 29 August 2025 14:51:42 +0000 (0:00:00.970) 0:01:03.309 *********
2025-08-29 14:57:42.909347
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 14:57:42.909354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 14:57:42.909364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 14:57:42.909371 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.909381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 14:57:42.909392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 14:57:42.909400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 14:57:42.909407 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.909414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 14:57:42.909422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 14:57:42.909428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 14:57:42.909435 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.909441 | orchestrator |
2025-08-29 14:57:42.909448 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA
certificates] *******
2025-08-29 14:57:42.909453 | orchestrator | Friday 29 August 2025 14:51:43 +0000 (0:00:00.980) 0:01:04.290 *********
2025-08-29 14:57:42.909460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 14:57:42.909476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 14:57:42.909484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 14:57:42.909491 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.909498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 14:57:42.909506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 14:57:42.909513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 14:57:42.909519 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.909530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 14:57:42.909600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 14:57:42.909610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 14:57:42.909618 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.909625 | orchestrator |
2025-08-29 14:57:42.909633 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2025-08-29 14:57:42.909639 | orchestrator | Friday 29 August 2025 14:51:46 +0000 (0:00:02.810) 0:01:07.101 *********
2025-08-29 14:57:42.909644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 14:57:42.909717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 14:57:42.909726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 14:57:42.909733 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.909740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 14:57:42.909754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 14:57:42.909769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 14:57:42.909777 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.909784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 14:57:42.909792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 14:57:42.909799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:57:42.909807 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.909814 | orchestrator | 2025-08-29 14:57:42.909821 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-08-29 14:57:42.909828 | orchestrator | Friday 29 August 2025 14:51:47 +0000 (0:00:01.216) 0:01:08.317 ********* 2025-08-29 14:57:42.909854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:57:42.909871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:57:42.909878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:57:42.909885 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.909896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:57:42.909904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:57:42.909910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:57:42.909917 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.909925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:57:42.909948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:57:42.909959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:57:42.909966 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.909973 | orchestrator | 2025-08-29 14:57:42.909980 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-08-29 14:57:42.909987 | orchestrator | Friday 29 August 2025 14:51:48 +0000 (0:00:01.062) 0:01:09.380 ********* 2025-08-29 14:57:42.909994 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 14:57:42.910001 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 14:57:42.910096 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 14:57:42.910110 | orchestrator | 2025-08-29 14:57:42.910117 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-08-29 14:57:42.910124 | orchestrator | Friday 29 August 2025 14:51:51 +0000 (0:00:02.505) 0:01:11.885 ********* 2025-08-29 14:57:42.910131 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 14:57:42.910138 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 14:57:42.910145 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 14:57:42.910152 | orchestrator | 2025-08-29 14:57:42.910159 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-08-29 14:57:42.910166 | orchestrator | 
Friday 29 August 2025 14:51:53 +0000 (0:00:01.738) 0:01:13.624 ********* 2025-08-29 14:57:42.910173 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 14:57:42.910180 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 14:57:42.910187 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 14:57:42.910192 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 14:57:42.910196 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.910200 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 14:57:42.910204 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.910208 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 14:57:42.910212 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.910216 | orchestrator | 2025-08-29 14:57:42.910220 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-08-29 14:57:42.910230 | orchestrator | Friday 29 August 2025 14:51:54 +0000 (0:00:01.112) 0:01:14.736 ********* 2025-08-29 14:57:42.910234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 14:57:42.910239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 14:57:42.910246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 14:57:42.910255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:57:42.910260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:57:42.910264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:57:42.910274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:57:42.910279 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:57:42.910283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:57:42.910287 | orchestrator | 2025-08-29 14:57:42.910291 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-08-29 14:57:42.910295 | orchestrator | Friday 29 August 2025 14:51:57 +0000 (0:00:03.184) 0:01:17.921 ********* 2025-08-29 14:57:42.910299 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:57:42.910304 | orchestrator | 2025-08-29 14:57:42.910308 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-08-29 14:57:42.910314 | orchestrator | Friday 29 August 2025 14:51:58 +0000 (0:00:01.389) 0:01:19.310 ********* 2025-08-29 14:57:42.910319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': 
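As an aside to the loop output above: the items the loadbalancer tasks iterate over all share one kolla-style shape (`container_name`, `group`, `enabled`, `image`, `volumes`, plus an optional `healthcheck`). A minimal sketch of checking that shape in Python, using the haproxy entry copied from the log; the helper name `validate_service` is hypothetical and not part of kolla-ansible:

```python
# Illustrative only: validate the recurring service-dict shape seen in the
# log's loop items. REQUIRED_KEYS mirrors the fields every entry carries.
REQUIRED_KEYS = {"container_name", "group", "enabled", "image", "volumes"}

def validate_service(value: dict) -> list:
    """Return a list of missing/ill-formed fields in a kolla-style service dict."""
    problems = [k for k in REQUIRED_KEYS if k not in value]
    hc = value.get("healthcheck")
    if hc is not None and "test" not in hc:
        problems.append("healthcheck.test")
    return problems

# Entry trimmed from the 'Check loadbalancer containers' task output above.
haproxy = {
    "container_name": "haproxy",
    "group": "loadbalancer",
    "enabled": True,
    "image": "registry.osism.tech/kolla/haproxy:2024.2",
    "privileged": True,
    "volumes": ["/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro",
                "haproxy_socket:/var/lib/kolla/haproxy/"],
    "healthcheck": {"interval": "30", "retries": "3", "start_period": "5",
                    "test": ["CMD-SHELL",
                             "healthcheck_curl http://192.168.16.10:61313"],
                    "timeout": "30"},
}

print(validate_service(haproxy))  # an empty list means the dict is well-formed
```

Note the healthcheck fields map onto the usual container-engine semantics (interval/retries/start_period/timeout in seconds, plus a CMD-SHELL test command), which is why every enabled service in the loop carries the same five keys.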
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 14:57:42.910328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 14:57:42.910333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.910343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 
'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.910347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 14:57:42.910352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 14:57:42.910358 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.910746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 14:57:42.910810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.910835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 14:57:42.910842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.910848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.910855 | orchestrator | 2025-08-29 14:57:42.910862 | orchestrator | TASK 
[haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-08-29 14:57:42.910868 | orchestrator | Friday 29 August 2025 14:52:05 +0000 (0:00:06.719) 0:01:26.030 ********* 2025-08-29 14:57:42.910886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 14:57:42.910904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 14:57:42.910911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.910923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.910929 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.910935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 14:57:42.910941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 14:57:42.910950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.910956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.910962 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.910973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-08-29 14:57:42.910984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-08-29 14:57:42.910990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.910996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.911002 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.911008 | orchestrator |
2025-08-29 14:57:42.911056 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-08-29 14:57:42.911063 | orchestrator | Friday 29 August 2025 14:52:06 +0000 (0:00:01.182) 0:01:27.212 *********
2025-08-29 14:57:42.911069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-08-29 14:57:42.911078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-08-29 14:57:42.911084 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.911093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-08-29 14:57:42.911099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-08-29 14:57:42.911105 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.911110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-08-29 14:57:42.911116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-08-29 14:57:42.911127 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.911133 | orchestrator |
2025-08-29 14:57:42.911143 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-08-29 14:57:42.911150 | orchestrator | Friday 29 August 2025 14:52:08 +0000 (0:00:01.592) 0:01:28.805 *********
2025-08-29 14:57:42.911155 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:57:42.911161 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:57:42.911167 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:57:42.911173 | orchestrator |
2025-08-29 14:57:42.911178 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-08-29 14:57:42.911184 | orchestrator | Friday 29 August 2025 14:52:09 +0000 (0:00:01.581) 0:01:30.386 *********
2025-08-29 14:57:42.911190 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:57:42.911195 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:57:42.911201 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:57:42.911207 | orchestrator |
2025-08-29 14:57:42.911212 | orchestrator | TASK [include_role : barbican] *************************************************
2025-08-29 14:57:42.911218 | orchestrator | Friday 29 August 2025 14:52:12 +0000 (0:00:02.259) 0:01:32.646 *********
2025-08-29 14:57:42.911224 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:57:42.911229 | orchestrator |
2025-08-29 14:57:42.911235 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-08-29 14:57:42.911240 | orchestrator | Friday 29 August 2025 14:52:13 +0000 (0:00:01.226) 0:01:33.873 *********
2025-08-29 14:57:42.911247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 14:57:42.911255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.911263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.911277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 14:57:42.911301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.911317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.911328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 14:57:42.911338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.911352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.911369 | orchestrator |
2025-08-29 14:57:42.911379 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2025-08-29 14:57:42.911388 | orchestrator | Friday 29 August 2025 14:52:18 +0000 (0:00:04.978) 0:01:38.852 *********
2025-08-29 14:57:42.911404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 14:57:42.911414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.911423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.911434 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.911444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 14:57:42.911454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.911476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.911487 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.911503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 14:57:42.911510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.911516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.911522 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.911529 | orchestrator |
2025-08-29 14:57:42.911536 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-08-29 14:57:42.911545 | orchestrator | Friday 29 August 2025 14:52:20 +0000 (0:00:01.807) 0:01:40.659 *********
2025-08-29 14:57:42.911554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-08-29 14:57:42.911563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-08-29 14:57:42.911581 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.911590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-08-29 14:57:42.911598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-08-29 14:57:42.911606 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.911626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-08-29 14:57:42.911636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-08-29 14:57:42.911646 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.911654 | orchestrator |
2025-08-29 14:57:42.911660 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-08-29 14:57:42.911666 | orchestrator | Friday 29 August 2025 14:52:21 +0000 (0:00:01.311) 0:01:41.971 *********
2025-08-29 14:57:42.911672 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:57:42.911677 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:57:42.911683 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:57:42.911689 | orchestrator |
2025-08-29 14:57:42.911695 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-08-29 14:57:42.911701 | orchestrator | Friday 29 August 2025 14:52:22 +0000 (0:00:01.452) 0:01:43.423 *********
2025-08-29 14:57:42.911706 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:57:42.911712 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:57:42.911717 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:57:42.911723 | orchestrator |
2025-08-29 14:57:42.911734 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-08-29 14:57:42.911743 | orchestrator | Friday 29 August 2025 14:52:25 +0000 (0:00:02.133) 0:01:45.556 *********
2025-08-29 14:57:42.911753 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.911762 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.911772 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.911783 | orchestrator |
2025-08-29 14:57:42.911793 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2025-08-29 14:57:42.911802 | orchestrator | Friday 29 August 2025 14:52:25 +0000 (0:00:00.365) 0:01:45.922 *********
2025-08-29 14:57:42.911812 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:57:42.911822 | orchestrator |
2025-08-29 14:57:42.911833 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2025-08-29 14:57:42.911843 | orchestrator | Friday 29 August 2025 14:52:26 +0000 (0:00:00.643) 0:01:46.565 *********
2025-08-29 14:57:42.911852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-08-29 14:57:42.911869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-08-29 14:57:42.911877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-08-29 14:57:42.911883 | orchestrator |
2025-08-29 14:57:42.911894 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2025-08-29 14:57:42.911900 | orchestrator | Friday 29 August 2025 14:52:28 +0000 (0:00:02.727) 0:01:49.293 *********
2025-08-29 14:57:42.911912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-08-29 14:57:42.911920 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.911927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-08-29 14:57:42.911934 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.911940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-08-29 14:57:42.911953 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.911959 | orchestrator |
2025-08-29 14:57:42.911965 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2025-08-29 14:57:42.911971 | orchestrator | Friday 29 August 2025 14:52:31 +0000 (0:00:02.273) 0:01:51.566 *********
2025-08-29 14:57:42.911979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-08-29 14:57:42.911988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-08-29 14:57:42.911997 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.912033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-08-29 14:57:42.912050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-08-29 14:57:42.912061 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.912079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-08-29 14:57:42.912092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-08-29 14:57:42.912098 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.912104 | orchestrator |
2025-08-29 14:57:42.912110 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2025-08-29 14:57:42.912116 | orchestrator | Friday 29 August 2025 14:52:33 +0000 (0:00:02.618) 0:01:54.185 *********
2025-08-29 14:57:42.912131 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.912141 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.912151 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.912160 | orchestrator |
2025-08-29 14:57:42.912169 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-08-29 14:57:42.912179 | orchestrator | Friday 29 August 2025 14:52:34 +0000 (0:00:00.922) 0:01:55.108 *********
2025-08-29 14:57:42.912188 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.912198 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.912207 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.912217 | orchestrator |
2025-08-29 14:57:42.912224 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-08-29 14:57:42.912229 | orchestrator | Friday 29 August 2025 14:52:36 +0000 (0:00:01.488) 0:01:56.596 *********
2025-08-29 14:57:42.912235 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:57:42.912241 | orchestrator |
2025-08-29 14:57:42.912247 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-08-29 14:57:42.912253 | orchestrator | Friday 29 August 2025 14:52:36 +0000 (0:00:00.887) 0:01:57.484 *********
2025-08-29 14:57:42.912260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 14:57:42.912267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.912280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.912294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.912307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 14:57:42.912314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.912320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.912330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.912341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions':
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 14:57:42.912351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.912358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 
14:57:42.912365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.912370 | orchestrator | 2025-08-29 14:57:42.912376 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-08-29 14:57:42.912383 | orchestrator | Friday 29 August 2025 14:52:42 +0000 (0:00:05.427) 0:02:02.911 ********* 2025-08-29 14:57:42.912392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 14:57:42.912398 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.912411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.912418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.912424 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.912430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 14:57:42.912436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.912444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.912459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.912465 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.912471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 14:57:42.912477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.912483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.912492 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.912503 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.912509 | orchestrator | 2025-08-29 14:57:42.912515 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-08-29 14:57:42.912520 | orchestrator | Friday 29 August 2025 14:52:43 +0000 (0:00:01.452) 0:02:04.364 ********* 2025-08-29 14:57:42.912526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-08-29 14:57:42.912535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-08-29 14:57:42.912541 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.912547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-08-29 14:57:42.912552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-08-29 14:57:42.912558 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.912564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-08-29 14:57:42.912569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-08-29 14:57:42.912575 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.912580 | orchestrator | 2025-08-29 14:57:42.912586 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-08-29 14:57:42.912591 | orchestrator | Friday 29 August 2025 14:52:45 +0000 (0:00:01.344) 0:02:05.709 ********* 2025-08-29 14:57:42.912597 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.912602 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.912608 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.912613 | orchestrator | 2025-08-29 14:57:42.912619 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-08-29 14:57:42.912625 | orchestrator | Friday 29 August 2025 14:52:46 +0000 (0:00:01.585) 0:02:07.294 ********* 2025-08-29 14:57:42.912630 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.912635 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.912641 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.912647 | orchestrator | 2025-08-29 14:57:42.912652 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-08-29 14:57:42.912658 | orchestrator | Friday 29 
August 2025 14:52:49 +0000 (0:00:02.619) 0:02:09.913 ********* 2025-08-29 14:57:42.912663 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.912669 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.912674 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.912679 | orchestrator | 2025-08-29 14:57:42.912685 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-08-29 14:57:42.912690 | orchestrator | Friday 29 August 2025 14:52:50 +0000 (0:00:00.656) 0:02:10.570 ********* 2025-08-29 14:57:42.912696 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.912701 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.912707 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.912713 | orchestrator | 2025-08-29 14:57:42.912718 | orchestrator | TASK [include_role : designate] ************************************************ 2025-08-29 14:57:42.912724 | orchestrator | Friday 29 August 2025 14:52:50 +0000 (0:00:00.368) 0:02:10.938 ********* 2025-08-29 14:57:42.912734 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:57:42.912739 | orchestrator | 2025-08-29 14:57:42.912745 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-08-29 14:57:42.912750 | orchestrator | Friday 29 August 2025 14:52:51 +0000 (0:00:00.816) 0:02:11.755 ********* 2025-08-29 14:57:42.912759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 14:57:42.912769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 14:57:42.912776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.912782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.912788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.912794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 14:57:42.912804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.912810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 14:57:42.913283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 14:57:42.913350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 14:57:42.913356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913393 | orchestrator | 2025-08-29 14:57:42.913399 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-08-29 14:57:42.913405 | orchestrator | Friday 29 August 2025 14:52:55 +0000 (0:00:04.684) 0:02:16.440 ********* 2025-08-29 
14:57:42.913430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 14:57:42.913441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 14:57:42.913450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913526 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.913536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 14:57:42.913546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 14:57:42.913563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 14:57:42.913606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 14:57:42.913612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913644 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.913650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.913676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 
'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.913690 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.913696 | orchestrator |
2025-08-29 14:57:42.913702 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-08-29 14:57:42.913708 | orchestrator | Friday 29 August 2025 14:52:57 +0000 (0:00:01.094) 0:02:17.534 *********
2025-08-29 14:57:42.913714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-08-29 14:57:42.913720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-08-29 14:57:42.913727 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.913733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-08-29 14:57:42.913739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-08-29 14:57:42.913745 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.913751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-08-29 14:57:42.913756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-08-29 14:57:42.913762 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.913768 | orchestrator |
2025-08-29 14:57:42.913773 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-08-29 14:57:42.913779 | orchestrator | Friday 29 August 2025 14:52:58 +0000 (0:00:01.163) 0:02:18.697 *********
2025-08-29 14:57:42.913785 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:57:42.913790 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:57:42.913796 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:57:42.913802 | orchestrator |
2025-08-29 14:57:42.913808 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-08-29 14:57:42.913813 | orchestrator | Friday 29 August 2025 14:52:59 +0000 (0:00:01.458) 0:02:20.155 *********
2025-08-29 14:57:42.913819 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:57:42.913825 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:57:42.913830 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:57:42.913836 | orchestrator |
2025-08-29 14:57:42.913842 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-08-29 14:57:42.913853 | orchestrator | Friday 29 August 2025 14:53:02 +0000 (0:00:02.428) 0:02:22.584 *********
2025-08-29 14:57:42.913859 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.913866 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.913872 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.913878 | orchestrator |
2025-08-29 14:57:42.913885 | orchestrator | TASK [include_role : glance]
*************************************************** 2025-08-29 14:57:42.913891 | orchestrator | Friday 29 August 2025 14:53:02 +0000 (0:00:00.706) 0:02:23.291 ********* 2025-08-29 14:57:42.913897 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:57:42.913904 | orchestrator | 2025-08-29 14:57:42.913910 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-08-29 14:57:42.913916 | orchestrator | Friday 29 August 2025 14:53:03 +0000 (0:00:00.903) 0:02:24.194 ********* 2025-08-29 14:57:42.913940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 14:57:42.913957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 14:57:42.913979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 14:57:42.913992 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 14:57:42.914086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 14:57:42.914106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 14:57:42.914113 | orchestrator | 2025-08-29 14:57:42.914120 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-08-29 14:57:42.914126 | orchestrator | Friday 29 August 2025 14:53:09 +0000 (0:00:05.311) 0:02:29.505 ********* 2025-08-29 14:57:42.914152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 14:57:42.914172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 14:57:42.914182 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.914195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 14:57:42.914230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 14:57:42.914240 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.914253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 14:57:42.914282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 14:57:42.914299 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.914307 | orchestrator | 2025-08-29 14:57:42.914315 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-08-29 14:57:42.914324 | orchestrator | Friday 29 August 2025 14:53:13 +0000 (0:00:04.536) 0:02:34.042 ********* 2025-08-29 14:57:42.914333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 
14:57:42.914343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 14:57:42.914352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 14:57:42.914361 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.914374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 14:57:42.914390 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.914399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 14:57:42.914414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 14:57:42.914424 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.914432 | orchestrator | 2025-08-29 14:57:42.914441 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-08-29 14:57:42.914449 | orchestrator | Friday 29 August 2025 14:53:19 +0000 (0:00:06.183) 0:02:40.225 ********* 2025-08-29 14:57:42.914457 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.914467 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.914475 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.914484 | orchestrator | 2025-08-29 14:57:42.914492 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-08-29 14:57:42.914500 | orchestrator | Friday 29 August 2025 14:53:21 +0000 (0:00:01.546) 0:02:41.772 ********* 2025-08-29 14:57:42.914508 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.914517 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.914525 | orchestrator | changed: [testbed-node-2] 2025-08-29 
14:57:42.914533 | orchestrator | 2025-08-29 14:57:42.914541 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-08-29 14:57:42.914550 | orchestrator | Friday 29 August 2025 14:53:23 +0000 (0:00:02.391) 0:02:44.164 ********* 2025-08-29 14:57:42.914558 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.914567 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.914575 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.914583 | orchestrator | 2025-08-29 14:57:42.914591 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-08-29 14:57:42.914599 | orchestrator | Friday 29 August 2025 14:53:24 +0000 (0:00:00.671) 0:02:44.835 ********* 2025-08-29 14:57:42.914608 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:57:42.914617 | orchestrator | 2025-08-29 14:57:42.914625 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-08-29 14:57:42.914633 | orchestrator | Friday 29 August 2025 14:53:25 +0000 (0:00:00.972) 0:02:45.807 ********* 2025-08-29 14:57:42.914643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 14:57:42.914664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 14:57:42.914679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 14:57:42.914687 | orchestrator | 2025-08-29 14:57:42.914696 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-08-29 14:57:42.914704 | orchestrator | Friday 29 August 2025 14:53:29 +0000 (0:00:03.969) 0:02:49.777 ********* 2025-08-29 14:57:42.914721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 14:57:42.914730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 14:57:42.914739 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.914747 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.914755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 14:57:42.914764 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.914773 | orchestrator | 2025-08-29 14:57:42.914782 | orchestrator | TASK 
[haproxy-config : Configuring firewall for grafana] *********************** 2025-08-29 14:57:42.914798 | orchestrator | Friday 29 August 2025 14:53:30 +0000 (0:00:00.812) 0:02:50.590 ********* 2025-08-29 14:57:42.914808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 14:57:42.914818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 14:57:42.914828 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.914837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 14:57:42.914847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 14:57:42.914856 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.914866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 14:57:42.914881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 14:57:42.914891 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.914901 | orchestrator | 2025-08-29 14:57:42.914911 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-08-29 
14:57:42.914920 | orchestrator | Friday 29 August 2025 14:53:30 +0000 (0:00:00.755) 0:02:51.346 ********* 2025-08-29 14:57:42.914930 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.914939 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.914949 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.914959 | orchestrator | 2025-08-29 14:57:42.914969 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-08-29 14:57:42.914979 | orchestrator | Friday 29 August 2025 14:53:32 +0000 (0:00:01.406) 0:02:52.753 ********* 2025-08-29 14:57:42.914988 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.914998 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.915006 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.915073 | orchestrator | 2025-08-29 14:57:42.915085 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-08-29 14:57:42.915096 | orchestrator | Friday 29 August 2025 14:53:34 +0000 (0:00:02.362) 0:02:55.115 ********* 2025-08-29 14:57:42.915105 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.915114 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.915134 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.915143 | orchestrator | 2025-08-29 14:57:42.915154 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-08-29 14:57:42.915164 | orchestrator | Friday 29 August 2025 14:53:35 +0000 (0:00:00.765) 0:02:55.880 ********* 2025-08-29 14:57:42.915174 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:57:42.915184 | orchestrator | 2025-08-29 14:57:42.915194 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-08-29 14:57:42.915203 | orchestrator | Friday 29 August 2025 14:53:36 +0000 (0:00:01.054) 0:02:56.934 ********* 
2025-08-29 14:57:42.915215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 14:57:42.915249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 14:57:42.915261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 14:57:42.915278 | orchestrator | 2025-08-29 14:57:42.915288 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-08-29 14:57:42.915298 | orchestrator | Friday 29 August 2025 14:53:40 +0000 (0:00:04.226) 0:03:01.160 ********* 2025-08-29 14:57:42.915367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 14:57:42.915386 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.915402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 14:57:42.915414 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.915447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 
'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 14:57:42.915465 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.915475 | orchestrator | 2025-08-29 14:57:42.915484 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-08-29 14:57:42.915494 | orchestrator | Friday 29 August 2025 14:53:42 +0000 (0:00:01.492) 0:03:02.653 ********* 2025-08-29 14:57:42.915505 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 14:57:42.915517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 14:57:42.915526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 14:57:42.915537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 14:57:42.915549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-08-29 14:57:42.915557 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.915571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 14:57:42.915581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 14:57:42.915591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 14:57:42.915623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 14:57:42.915642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-08-29 14:57:42.915651 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.915661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 14:57:42.915671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 14:57:42.915680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 14:57:42.915690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 14:57:42.915700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-08-29 14:57:42.915709 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.915719 | orchestrator | 2025-08-29 14:57:42.915729 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-08-29 14:57:42.915738 | orchestrator | Friday 29 August 2025 14:53:43 +0000 (0:00:01.317) 0:03:03.971 ********* 2025-08-29 14:57:42.915749 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.915758 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.915767 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.915776 | orchestrator | 2025-08-29 14:57:42.915786 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-08-29 14:57:42.915795 | orchestrator | Friday 29 August 2025 14:53:44 +0000 (0:00:01.352) 0:03:05.324 
********* 2025-08-29 14:57:42.915805 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.915815 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.915824 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.915834 | orchestrator | 2025-08-29 14:57:42.915844 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-08-29 14:57:42.915853 | orchestrator | Friday 29 August 2025 14:53:47 +0000 (0:00:02.219) 0:03:07.543 ********* 2025-08-29 14:57:42.915863 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.915874 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.915883 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.915893 | orchestrator | 2025-08-29 14:57:42.915901 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-08-29 14:57:42.915911 | orchestrator | Friday 29 August 2025 14:53:47 +0000 (0:00:00.388) 0:03:07.931 ********* 2025-08-29 14:57:42.915921 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.915931 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.915941 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.915951 | orchestrator | 2025-08-29 14:57:42.915965 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-08-29 14:57:42.915985 | orchestrator | Friday 29 August 2025 14:53:48 +0000 (0:00:00.621) 0:03:08.553 ********* 2025-08-29 14:57:42.915994 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:57:42.916004 | orchestrator | 2025-08-29 14:57:42.916035 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-08-29 14:57:42.916044 | orchestrator | Friday 29 August 2025 14:53:49 +0000 (0:00:01.141) 0:03:09.695 ********* 2025-08-29 14:57:42.916176 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 14:57:42.916194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 14:57:42.916205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 14:57:42.916215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 14:57:42.916236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 14:57:42.916259 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 14:57:42.916296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 14:57:42.916308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 14:57:42.916319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 14:57:42.916329 | orchestrator | 2025-08-29 14:57:42.916338 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-08-29 14:57:42.916348 | orchestrator | Friday 29 August 2025 14:53:53 +0000 (0:00:04.262) 0:03:13.957 ********* 2025-08-29 14:57:42.916364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 14:57:42.916382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 14:57:42.916399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 
14:57:42.916410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 14:57:42.916420 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.916428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 14:57:42.916436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  
2025-08-29 14:57:42.916453 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.916467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 14:57:42.916486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 14:57:42.916497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 14:57:42.916507 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.916516 | orchestrator | 2025-08-29 14:57:42.916525 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-08-29 14:57:42.916536 | orchestrator | Friday 29 August 2025 14:53:54 +0000 (0:00:01.108) 0:03:15.066 ********* 2025-08-29 14:57:42.916546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 14:57:42.916558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 14:57:42.916568 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.916578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 14:57:42.916588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 14:57:42.916605 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.916615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 14:57:42.916625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 14:57:42.916634 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.916645 | orchestrator | 2025-08-29 14:57:42.916654 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-08-29 14:57:42.916664 | orchestrator | Friday 29 August 2025 14:53:55 +0000 (0:00:01.244) 0:03:16.310 ********* 2025-08-29 14:57:42.916673 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.916682 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.916691 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.916700 | orchestrator | 2025-08-29 14:57:42.916710 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-08-29 14:57:42.916720 | orchestrator | Friday 29 August 2025 14:53:57 +0000 (0:00:01.498) 0:03:17.809 ********* 2025-08-29 14:57:42.916730 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.916740 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.916749 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.916759 | orchestrator | 2025-08-29 14:57:42.916770 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-08-29 
14:57:42.916779 | orchestrator | Friday 29 August 2025 14:53:59 +0000 (0:00:02.254) 0:03:20.063 ********* 2025-08-29 14:57:42.916788 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.916797 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.916806 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.916815 | orchestrator | 2025-08-29 14:57:42.916825 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-08-29 14:57:42.916834 | orchestrator | Friday 29 August 2025 14:54:00 +0000 (0:00:00.727) 0:03:20.790 ********* 2025-08-29 14:57:42.916844 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:57:42.916855 | orchestrator | 2025-08-29 14:57:42.916865 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-08-29 14:57:42.916875 | orchestrator | Friday 29 August 2025 14:54:01 +0000 (0:00:01.104) 0:03:21.895 ********* 2025-08-29 14:57:42.916913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 14:57:42.916926 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.916946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 14:57:42.916958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.916973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 14:57:42.916989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.916999 | orchestrator | 2025-08-29 14:57:42.917010 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-08-29 14:57:42.917041 | orchestrator | Friday 29 August 2025 14:54:05 +0000 (0:00:03.987) 0:03:25.882 ********* 2025-08-29 14:57:42.917052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 14:57:42.917069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 14:57:42.917083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917105 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.917113 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 14:57:42.917123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 14:57:42.917136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917145 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.917154 | orchestrator | 2025-08-29 14:57:42.917163 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-08-29 14:57:42.917174 | 
orchestrator | Friday 29 August 2025 14:54:06 +0000 (0:00:01.215) 0:03:27.097 ********* 2025-08-29 14:57:42.917185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-08-29 14:57:42.917196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-08-29 14:57:42.917206 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.917215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-08-29 14:57:42.917225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-08-29 14:57:42.917234 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.917243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-08-29 14:57:42.917259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-08-29 14:57:42.917270 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.917279 | orchestrator | 2025-08-29 14:57:42.917289 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-08-29 14:57:42.917299 | orchestrator | Friday 29 August 2025 14:54:07 +0000 (0:00:00.994) 0:03:28.091 ********* 2025-08-29 
14:57:42.917307 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.917317 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.917328 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.917338 | orchestrator | 2025-08-29 14:57:42.917347 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-08-29 14:57:42.917356 | orchestrator | Friday 29 August 2025 14:54:08 +0000 (0:00:01.304) 0:03:29.396 ********* 2025-08-29 14:57:42.917366 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.917375 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.917385 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.917394 | orchestrator | 2025-08-29 14:57:42.917403 | orchestrator | TASK [include_role : manila] *************************************************** 2025-08-29 14:57:42.917412 | orchestrator | Friday 29 August 2025 14:54:11 +0000 (0:00:02.374) 0:03:31.770 ********* 2025-08-29 14:57:42.917465 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:57:42.917483 | orchestrator | 2025-08-29 14:57:42.917493 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-08-29 14:57:42.917503 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:01.391) 0:03:33.162 ********* 2025-08-29 14:57:42.917514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-08-29 14:57:42.917523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-08-29 14:57:42.917579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-08-29 14:57:42.917628 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917677 | orchestrator | 2025-08-29 14:57:42.917687 | orchestrator | TASK [haproxy-config : Add configuration for 
manila when using single external frontend] *** 2025-08-29 14:57:42.917697 | orchestrator | Friday 29 August 2025 14:54:17 +0000 (0:00:04.782) 0:03:37.944 ********* 2025-08-29 14:57:42.917707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 14:57:42.917717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917748 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.917763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 
14:57:42.917785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917816 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 14:57:42.917826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 14:57:42.917836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.917888 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.917898 | orchestrator | 2025-08-29 14:57:42.917908 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-08-29 14:57:42.917917 | orchestrator | Friday 29 August 2025 14:54:18 +0000 (0:00:00.852) 0:03:38.797 ********* 2025-08-29 14:57:42.917927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 14:57:42.917937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 14:57:42.917947 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.917957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 14:57:42.917967 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 14:57:42.917976 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.917986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 14:57:42.917995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 14:57:42.918004 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.918075 | orchestrator | 2025-08-29 14:57:42.918092 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-08-29 14:57:42.918102 | orchestrator | Friday 29 August 2025 14:54:20 +0000 (0:00:01.887) 0:03:40.684 ********* 2025-08-29 14:57:42.918112 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.918122 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.918131 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.918141 | orchestrator | 2025-08-29 14:57:42.918152 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-08-29 14:57:42.918162 | orchestrator | Friday 29 August 2025 14:54:21 +0000 (0:00:01.388) 0:03:42.073 ********* 2025-08-29 14:57:42.918172 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.918181 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.918190 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.918200 | orchestrator | 2025-08-29 14:57:42.918210 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-08-29 
14:57:42.918220 | orchestrator | Friday 29 August 2025 14:54:23 +0000 (0:00:02.324) 0:03:44.397 ********* 2025-08-29 14:57:42.918237 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:57:42.918246 | orchestrator | 2025-08-29 14:57:42.918254 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-08-29 14:57:42.918261 | orchestrator | Friday 29 August 2025 14:54:25 +0000 (0:00:01.389) 0:03:45.787 ********* 2025-08-29 14:57:42.918270 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 14:57:42.918278 | orchestrator | 2025-08-29 14:57:42.918286 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-08-29 14:57:42.918295 | orchestrator | Friday 29 August 2025 14:54:28 +0000 (0:00:02.891) 0:03:48.678 ********* 2025-08-29 14:57:42.918322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 14:57:42.918336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 14:57:42.918346 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.918356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 14:57:42.918379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 14:57:42.918391 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 14:57:42.918408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 14:57:42.918420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': 
{'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 14:57:42.918438 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.918448 | orchestrator | 2025-08-29 14:57:42.918458 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-08-29 14:57:42.918469 | orchestrator | Friday 29 August 2025 14:54:31 +0000 (0:00:02.985) 0:03:51.664 ********* 2025-08-29 14:57:42.918484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 
fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 14:57:42.918502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 14:57:42.918512 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.918523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 14:57:42.918541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 14:57:42.918552 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 14:57:42.918576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 14:57:42.918588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': 
{'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 14:57:42.918598 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.918607 | orchestrator | 2025-08-29 14:57:42.918616 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-08-29 14:57:42.918632 | orchestrator | Friday 29 August 2025 14:54:33 +0000 (0:00:02.435) 0:03:54.099 ********* 2025-08-29 14:57:42.918641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 14:57:42.918651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 14:57:42.918661 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.918675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 14:57:42.918687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 14:57:42.918697 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.918712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 14:57:42.918723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 14:57:42.918733 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.918742 | orchestrator | 2025-08-29 14:57:42.918750 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-08-29 14:57:42.918766 | orchestrator | Friday 29 August 2025 14:54:37 +0000 (0:00:03.530) 0:03:57.630 ********* 2025-08-29 14:57:42.918775 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.918784 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.918793 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.918802 | orchestrator | 2025-08-29 14:57:42.918811 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-08-29 14:57:42.918821 | orchestrator | Friday 29 August 2025 14:54:39 +0000 (0:00:01.890) 0:03:59.521 ********* 2025-08-29 14:57:42.918831 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.918840 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.918849 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.918858 | orchestrator | 2025-08-29 14:57:42.918867 | orchestrator | TASK [include_role : masakari] 
************************************************* 2025-08-29 14:57:42.918876 | orchestrator | Friday 29 August 2025 14:54:40 +0000 (0:00:01.660) 0:04:01.182 ********* 2025-08-29 14:57:42.918884 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.918894 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.918904 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.918913 | orchestrator | 2025-08-29 14:57:42.918922 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-08-29 14:57:42.918932 | orchestrator | Friday 29 August 2025 14:54:41 +0000 (0:00:00.383) 0:04:01.565 ********* 2025-08-29 14:57:42.918942 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:57:42.918951 | orchestrator | 2025-08-29 14:57:42.918960 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-08-29 14:57:42.918968 | orchestrator | Friday 29 August 2025 14:54:42 +0000 (0:00:01.527) 0:04:03.093 ********* 2025-08-29 14:57:42.918979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 14:57:42.918995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 14:57:42.919076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 14:57:42.919095 | orchestrator | 2025-08-29 14:57:42.919104 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-08-29 14:57:42.919112 | orchestrator | Friday 29 August 2025 14:54:44 +0000 (0:00:01.519) 0:04:04.612 ********* 2025-08-29 14:57:42.919122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 14:57:42.919132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 14:57:42.919141 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.919150 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.919159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': 
'30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 14:57:42.919169 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.919178 | orchestrator | 2025-08-29 14:57:42.919187 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-08-29 14:57:42.919196 | orchestrator | Friday 29 August 2025 14:54:44 +0000 (0:00:00.438) 0:04:05.051 ********* 2025-08-29 14:57:42.919210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 14:57:42.919221 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.919232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 14:57:42.919241 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.919256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 14:57:42.919273 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.919283 | orchestrator | 2025-08-29 14:57:42.919292 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-08-29 14:57:42.919301 | orchestrator | Friday 
29 August 2025 14:54:45 +0000 (0:00:00.974) 0:04:06.026 ********* 2025-08-29 14:57:42.919310 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.919319 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.919328 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.919336 | orchestrator | 2025-08-29 14:57:42.919345 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-08-29 14:57:42.919354 | orchestrator | Friday 29 August 2025 14:54:46 +0000 (0:00:00.496) 0:04:06.522 ********* 2025-08-29 14:57:42.919364 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.919373 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.919382 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.919391 | orchestrator | 2025-08-29 14:57:42.919400 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-08-29 14:57:42.919410 | orchestrator | Friday 29 August 2025 14:54:47 +0000 (0:00:01.402) 0:04:07.924 ********* 2025-08-29 14:57:42.919419 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.919428 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.919437 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.919444 | orchestrator | 2025-08-29 14:57:42.919451 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-08-29 14:57:42.919459 | orchestrator | Friday 29 August 2025 14:54:47 +0000 (0:00:00.321) 0:04:08.245 ********* 2025-08-29 14:57:42.919467 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:57:42.919475 | orchestrator | 2025-08-29 14:57:42.919482 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-08-29 14:57:42.919490 | orchestrator | Friday 29 August 2025 14:54:49 +0000 (0:00:01.673) 0:04:09.919 ********* 2025-08-29 14:57:42.919499 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 14:57:42.919510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.919525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.919548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 14:57:42.919558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.919569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.919579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 14:57:42.919594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.919614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.919624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': 
'30'}}})  2025-08-29 14:57:42.919635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 14:57:42.919645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:57:42.919655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.919665 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:57:42.919684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.919700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:57:42.919711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:57:42.919720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:57:42.919730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.919741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.919766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 14:57:42.919777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:57:42.919792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:57:42.919802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.919812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.919822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 14:57:42.919837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 14:57:42.919855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:57:42.919871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:57:42.919881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.919890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 14:57:42.919901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 14:57:42.919921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:57:42.919936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.919945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.919954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.919963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 14:57:42.919978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.919993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:57:42.920004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:57:42.920037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.920047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:57:42.920057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.920067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 14:57:42.920087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:57:42.920101 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.920116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 14:57:42.920126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:57:42.920136 | orchestrator | 2025-08-29 14:57:42.920146 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-08-29 14:57:42.920155 | orchestrator | Friday 29 August 2025 14:54:54 +0000 (0:00:05.482) 0:04:15.401 ********* 2025-08-29 14:57:42.920164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 14:57:42.920181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 14:57:42.920220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-08-29 14:57:42.920259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 14:57:42.920301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-08-29 14:57:42.920316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 14:57:42.920325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 14:57:42.920361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 14:57:42.920371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 14:57:42.920387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-08-29 14:57:42.920416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 14:57:42.920497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 14:57:42.920523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-08-29 14:57:42.920553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-08-29 14:57:42.920564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 14:57:42.920580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-08-29 14:57:42.920588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920600 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.920608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-08-29 14:57:42.920616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-08-29 14:57:42.920624 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.920635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 14:57:42.920648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-08-29 14:57:42.920684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 14:57:42.920704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 14:57:42.920964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.920988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 14:57:42.920996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.921004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-08-29 14:57:42.921061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 14:57:42.921076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:57:42.921109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-08-29 14:57:42.921125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-08-29 14:57:42.921133 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.921141 | orchestrator |
2025-08-29 14:57:42.921149 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-08-29 14:57:42.921157 | orchestrator | Friday 29 August 2025 14:54:56 +0000 (0:00:01.819) 0:04:17.221 *********
2025-08-29 14:57:42.921165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-08-29 14:57:42.921173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-08-29 14:57:42.921182 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.921189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-08-29 14:57:42.921196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-08-29 14:57:42.921204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-08-29 14:57:42.921212 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.921220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-08-29 14:57:42.921227 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.921235 | orchestrator |
2025-08-29 14:57:42.921242 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-08-29 14:57:42.921250 | orchestrator | Friday 29 August 2025 14:54:59 +0000 (0:00:02.293) 0:04:19.514 *********
2025-08-29 14:57:42.921256 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:57:42.921264 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:57:42.921271 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:57:42.921279 | orchestrator |
2025-08-29 14:57:42.921285 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-08-29 14:57:42.921293 | orchestrator | Friday 29 August 2025 14:55:00 +0000 (0:00:01.398) 0:04:20.913 *********
2025-08-29 14:57:42.921300 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:57:42.921307 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:57:42.921318 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:57:42.921324 | orchestrator |
2025-08-29 14:57:42.921331 | orchestrator | TASK [include_role : placement] ************************************************
2025-08-29 14:57:42.921338 | orchestrator | Friday 29 August 2025 14:55:02 +0000 (0:00:02.265) 0:04:23.178 *********
2025-08-29 14:57:42.921345 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:57:42.921351 | orchestrator |
2025-08-29 14:57:42.921358 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-08-29 14:57:42.921371 | orchestrator | Friday 29 August 2025 14:55:04 +0000 (0:00:01.350) 0:04:24.528 *********
2025-08-29 14:57:42.921402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-08-29 14:57:42.921411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-08-29 14:57:42.921419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-08-29 14:57:42.921426 | orchestrator |
2025-08-29 14:57:42.921434 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-08-29 14:57:42.921441 | orchestrator | Friday 29 August 2025 14:55:08 +0000 (0:00:04.321) 0:04:28.850 ********* 2025-08-29 
14:57:42.921451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 14:57:42.921464 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.921489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 14:57:42.921498 | orchestrator | skipping: [testbed-node-1] 
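The items echoed by these haproxy-config tasks follow kolla-ansible's per-service schema: each service carries a `haproxy` dict whose entries are flagged `external: True/False`, and the "single external frontend" task skips when that deployment mode is off. As a rough illustration only (a hypothetical helper, not code from the playbook), the internal/external split visible in the log can be sketched like this:

```python
def split_frontends(haproxy: dict) -> tuple[list[str], list[str]]:
    """Partition a service's haproxy entries into internal and external
    frontends, mirroring the 'external' flag seen in the logged items."""
    internal, external = [], []
    for name, cfg in haproxy.items():
        (external if cfg.get("external") else internal).append(name)
    return internal, external

# Dict shaped like the placement-api item echoed in the tasks above.
placement_haproxy = {
    "placement_api": {
        "enabled": True, "mode": "http", "external": False,
        "port": "8780", "listen_port": "8780", "tls_backend": "no",
    },
    "placement_api_external": {
        "enabled": True, "mode": "http", "external": True,
        "external_fqdn": "api.testbed.osism.xyz",
        "port": "8780", "listen_port": "8780", "tls_backend": "no",
    },
}

internal, external = split_frontends(placement_haproxy)
print(internal)   # ['placement_api']
print(external)   # ['placement_api_external']
```

Both entries share port 8780; only the external one carries `external_fqdn`, which is why the firewall and single-frontend tasks iterate the same items but skip on different conditions.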
2025-08-29 14:57:42.921506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 14:57:42.921513 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.921520 | orchestrator | 2025-08-29 14:57:42.921527 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-08-29 14:57:42.921533 | orchestrator | Friday 29 August 2025 14:55:08 +0000 (0:00:00.578) 0:04:29.428 ********* 2025-08-29 14:57:42.921540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 14:57:42.921547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 14:57:42.921555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}})  2025-08-29 14:57:42.921562 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.921570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 14:57:42.921578 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.921585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 14:57:42.921593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 14:57:42.921601 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.921617 | orchestrator | 2025-08-29 14:57:42.921625 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-08-29 14:57:42.921632 | orchestrator | Friday 29 August 2025 14:55:09 +0000 (0:00:00.772) 0:04:30.201 ********* 2025-08-29 14:57:42.921640 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.921647 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.921654 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.921662 | orchestrator | 2025-08-29 14:57:42.921670 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-08-29 14:57:42.921678 | orchestrator | Friday 29 August 2025 14:55:10 +0000 (0:00:01.205) 0:04:31.406 ********* 2025-08-29 14:57:42.921691 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.921698 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.921706 | orchestrator 
| changed: [testbed-node-2] 2025-08-29 14:57:42.921714 | orchestrator | 2025-08-29 14:57:42.921722 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-08-29 14:57:42.921730 | orchestrator | Friday 29 August 2025 14:55:13 +0000 (0:00:02.192) 0:04:33.598 ********* 2025-08-29 14:57:42.921739 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:57:42.921748 | orchestrator | 2025-08-29 14:57:42.921755 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-08-29 14:57:42.921763 | orchestrator | Friday 29 August 2025 14:55:14 +0000 (0:00:01.690) 0:04:35.289 ********* 2025-08-29 14:57:42.921800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 
14:57:42.921811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.921821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.921836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 14:57:42.921880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 14:57:42.921893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.921902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.921913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.921928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.921938 | orchestrator | 2025-08-29 14:57:42.921948 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-08-29 14:57:42.921958 | orchestrator | Friday 29 August 2025 14:55:19 +0000 (0:00:04.794) 0:04:40.083 ********* 2025-08-29 14:57:42.921994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 14:57:42.922004 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.922052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.922063 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.922073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 14:57:42.922092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.922109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.922118 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.922156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 14:57:42.922167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.922175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.922189 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.922197 | orchestrator | 2025-08-29 14:57:42.922205 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-08-29 14:57:42.922213 | orchestrator | Friday 29 August 2025 14:55:20 +0000 (0:00:01.397) 0:04:41.482 ********* 2025-08-29 14:57:42.922221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 14:57:42.922233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 14:57:42.922242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 14:57:42.922251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 14:57:42.922259 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.922271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 14:57:42.922279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 14:57:42.922288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 14:57:42.922295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 14:57:42.922327 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.922335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 14:57:42.922343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 14:57:42.922352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 14:57:42.922360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 14:57:42.922368 | orchestrator | 
skipping: [testbed-node-2] 2025-08-29 14:57:42.922375 | orchestrator | 2025-08-29 14:57:42.922388 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-08-29 14:57:42.922396 | orchestrator | Friday 29 August 2025 14:55:21 +0000 (0:00:00.914) 0:04:42.396 ********* 2025-08-29 14:57:42.922403 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.922410 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.922418 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.922425 | orchestrator | 2025-08-29 14:57:42.922433 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-08-29 14:57:42.922441 | orchestrator | Friday 29 August 2025 14:55:23 +0000 (0:00:01.372) 0:04:43.769 ********* 2025-08-29 14:57:42.922448 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.922455 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.922462 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.922469 | orchestrator | 2025-08-29 14:57:42.922477 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-08-29 14:57:42.922484 | orchestrator | Friday 29 August 2025 14:55:25 +0000 (0:00:02.164) 0:04:45.934 ********* 2025-08-29 14:57:42.922491 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:57:42.922498 | orchestrator | 2025-08-29 14:57:42.922505 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-08-29 14:57:42.922513 | orchestrator | Friday 29 August 2025 14:55:27 +0000 (0:00:01.696) 0:04:47.630 ********* 2025-08-29 14:57:42.922520 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-08-29 14:57:42.922529 | orchestrator | 2025-08-29 14:57:42.922536 | orchestrator | TASK 
[haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-08-29 14:57:42.922543 | orchestrator | Friday 29 August 2025 14:55:27 +0000 (0:00:00.857) 0:04:48.487 ********* 2025-08-29 14:57:42.922551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-08-29 14:57:42.922564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-08-29 14:57:42.922574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-08-29 14:57:42.922582 | orchestrator | 2025-08-29 14:57:42.922589 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external 
frontend] *** 2025-08-29 14:57:42.922597 | orchestrator | Friday 29 August 2025 14:55:32 +0000 (0:00:04.897) 0:04:53.385 ********* 2025-08-29 14:57:42.922633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 14:57:42.922649 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.922657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 14:57:42.922664 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.922671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 14:57:42.922678 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.922686 | 
orchestrator | 2025-08-29 14:57:42.922693 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-08-29 14:57:42.922700 | orchestrator | Friday 29 August 2025 14:55:33 +0000 (0:00:01.037) 0:04:54.423 ********* 2025-08-29 14:57:42.922706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 14:57:42.922714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 14:57:42.922722 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.922730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 14:57:42.922737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 14:57:42.922745 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.922752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 14:57:42.922760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 14:57:42.922772 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.922779 | orchestrator | 2025-08-29 14:57:42.922787 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-08-29 14:57:42.922794 | orchestrator | Friday 29 August 2025 14:55:35 +0000 (0:00:01.665) 0:04:56.088 ********* 2025-08-29 14:57:42.922803 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.922811 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.922818 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.922831 | orchestrator | 2025-08-29 14:57:42.922839 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-08-29 14:57:42.922846 | orchestrator | Friday 29 August 2025 14:55:38 +0000 (0:00:02.479) 0:04:58.567 ********* 2025-08-29 14:57:42.922854 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.922862 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.922869 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.922877 | orchestrator | 2025-08-29 14:57:42.922884 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-08-29 14:57:42.922892 | orchestrator | Friday 29 August 2025 14:55:41 +0000 (0:00:03.091) 0:05:01.658 ********* 2025-08-29 14:57:42.922923 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-08-29 14:57:42.922933 | orchestrator | 2025-08-29 14:57:42.922941 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-08-29 14:57:42.922948 | orchestrator | Friday 29 August 2025 14:55:42 +0000 (0:00:01.608) 0:05:03.267 ********* 2025-08-29 14:57:42.922957 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 14:57:42.922965 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.922973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 14:57:42.922981 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.922989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 14:57:42.922997 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.923005 | orchestrator | 2025-08-29 14:57:42.923065 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy 
when using single external frontend] *** 2025-08-29 14:57:42.923075 | orchestrator | Friday 29 August 2025 14:55:44 +0000 (0:00:01.281) 0:05:04.548 ********* 2025-08-29 14:57:42.923083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 14:57:42.923091 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.923099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 14:57:42.923122 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.923130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  
2025-08-29 14:57:42.923139 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.923146 | orchestrator | 2025-08-29 14:57:42.923154 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-08-29 14:57:42.923162 | orchestrator | Friday 29 August 2025 14:55:45 +0000 (0:00:01.675) 0:05:06.223 ********* 2025-08-29 14:57:42.923170 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.923177 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.923184 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.923192 | orchestrator | 2025-08-29 14:57:42.923224 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-08-29 14:57:42.923234 | orchestrator | Friday 29 August 2025 14:55:47 +0000 (0:00:02.088) 0:05:08.312 ********* 2025-08-29 14:57:42.923242 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:57:42.923250 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:57:42.923258 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:57:42.923266 | orchestrator | 2025-08-29 14:57:42.923274 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-08-29 14:57:42.923281 | orchestrator | Friday 29 August 2025 14:55:50 +0000 (0:00:02.713) 0:05:11.026 ********* 2025-08-29 14:57:42.923288 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:57:42.923296 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:57:42.923304 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:57:42.923311 | orchestrator | 2025-08-29 14:57:42.923319 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-08-29 14:57:42.923326 | orchestrator | Friday 29 August 2025 14:55:53 +0000 (0:00:03.417) 0:05:14.443 ********* 2025-08-29 14:57:42.923334 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item=nova-serialproxy) 2025-08-29 14:57:42.923342 | orchestrator | 2025-08-29 14:57:42.923350 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-08-29 14:57:42.923358 | orchestrator | Friday 29 August 2025 14:55:54 +0000 (0:00:00.979) 0:05:15.423 ********* 2025-08-29 14:57:42.923366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 14:57:42.923375 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.923383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 14:57:42.923397 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.923406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 14:57:42.923413 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.923421 | orchestrator | 2025-08-29 14:57:42.923428 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-08-29 14:57:42.923436 | orchestrator | Friday 29 August 2025 14:55:56 +0000 (0:00:01.649) 0:05:17.072 ********* 2025-08-29 14:57:42.923447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 14:57:42.923455 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.923462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 14:57:42.923471 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.923501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 14:57:42.923511 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.923518 | orchestrator | 2025-08-29 14:57:42.923526 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-08-29 14:57:42.923533 | orchestrator | Friday 29 August 2025 14:55:58 +0000 (0:00:01.482) 0:05:18.555 ********* 2025-08-29 14:57:42.923541 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.923548 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.923556 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.923563 | orchestrator | 2025-08-29 14:57:42.923571 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-08-29 14:57:42.923578 | orchestrator | Friday 29 August 2025 14:55:59 +0000 (0:00:01.762) 0:05:20.317 ********* 2025-08-29 14:57:42.923585 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:57:42.923593 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:57:42.923601 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:57:42.923609 | orchestrator | 2025-08-29 14:57:42.923617 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-08-29 14:57:42.923631 | orchestrator | Friday 29 August 2025 14:56:02 +0000 (0:00:02.775) 0:05:23.092 ********* 2025-08-29 14:57:42.923639 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:57:42.923646 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:57:42.923654 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:57:42.923661 | orchestrator | 2025-08-29 14:57:42.923669 | 
orchestrator | TASK [include_role : octavia] ************************************************** 2025-08-29 14:57:42.923677 | orchestrator | Friday 29 August 2025 14:56:06 +0000 (0:00:03.506) 0:05:26.599 ********* 2025-08-29 14:57:42.923684 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:57:42.923692 | orchestrator | 2025-08-29 14:57:42.923700 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-08-29 14:57:42.923708 | orchestrator | Friday 29 August 2025 14:56:07 +0000 (0:00:01.725) 0:05:28.324 ********* 2025-08-29 14:57:42.923716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 14:57:42.923725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 14:57:42.923737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 14:57:42.923769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 14:57:42.923780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 14:57:42.923797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.923807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 14:57:42.923816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 14:57:42.923830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 14:57:42.923858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.923868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 14:57:42.923884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 14:57:42.923893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 14:57:42.923903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 14:57:42.923916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.923925 | orchestrator | 2025-08-29 14:57:42.923934 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-08-29 14:57:42.923942 | orchestrator | Friday 29 August 2025 14:56:11 +0000 (0:00:03.689) 0:05:32.013 ********* 2025-08-29 14:57:42.923965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 14:57:42.923977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 14:57:42.923984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 14:57:42.923991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 14:57:42.923998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.924005 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.924032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 14:57:42.924061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 14:57:42.924068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 14:57:42.924079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 14:57:42.924086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.924093 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.924100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 14:57:42.924111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 14:57:42.924118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 14:57:42.924147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 14:57:42.924161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:57:42.924169 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.924176 | orchestrator | 2025-08-29 14:57:42.924184 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-08-29 
14:57:42.924190 | orchestrator | Friday 29 August 2025 14:56:12 +0000 (0:00:00.804) 0:05:32.818 ********* 2025-08-29 14:57:42.924198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 14:57:42.924206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 14:57:42.924213 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.924220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 14:57:42.924227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 14:57:42.924234 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.924241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 14:57:42.924248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 14:57:42.924255 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.924263 | orchestrator | 2025-08-29 14:57:42.924270 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] 
************ 2025-08-29 14:57:42.924280 | orchestrator | Friday 29 August 2025 14:56:13 +0000 (0:00:01.616) 0:05:34.435 ********* 2025-08-29 14:57:42.924288 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.924296 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.924303 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.924310 | orchestrator | 2025-08-29 14:57:42.924316 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-08-29 14:57:42.924324 | orchestrator | Friday 29 August 2025 14:56:15 +0000 (0:00:01.326) 0:05:35.761 ********* 2025-08-29 14:57:42.924330 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.924337 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.924352 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.924365 | orchestrator | 2025-08-29 14:57:42.924373 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-08-29 14:57:42.924381 | orchestrator | Friday 29 August 2025 14:56:17 +0000 (0:00:02.653) 0:05:38.415 ********* 2025-08-29 14:57:42.924389 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:57:42.924397 | orchestrator | 2025-08-29 14:57:42.924405 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-08-29 14:57:42.924412 | orchestrator | Friday 29 August 2025 14:56:19 +0000 (0:00:01.506) 0:05:39.921 ********* 2025-08-29 14:57:42.924447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 14:57:42.924458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 14:57:42.924467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 14:57:42.924477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 14:57:42.924580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 14:57:42.924593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 14:57:42.924602 | orchestrator | 2025-08-29 14:57:42.924611 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-08-29 14:57:42.924619 | orchestrator | Friday 29 August 2025 14:56:25 +0000 (0:00:05.689) 0:05:45.611 ********* 2025-08-29 14:57:42.924627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 14:57:42.924636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 14:57:42.924649 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.924662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 14:57:42.924693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 14:57:42.924702 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.924711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 14:57:42.924720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 14:57:42.924732 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.924740 | orchestrator | 2025-08-29 14:57:42.924748 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] 
******************** 2025-08-29 14:57:42.924755 | orchestrator | Friday 29 August 2025 14:56:25 +0000 (0:00:00.671) 0:05:46.282 ********* 2025-08-29 14:57:42.924767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 14:57:42.924777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 14:57:42.924786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 14:57:42.924794 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.924802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 14:57:42.924831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 14:57:42.924840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 14:57:42.924848 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.924856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 14:57:42.924864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 14:57:42.924872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 14:57:42.924880 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.924888 | orchestrator | 2025-08-29 14:57:42.924896 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-08-29 14:57:42.924904 | orchestrator | Friday 29 August 2025 14:56:26 +0000 (0:00:00.947) 0:05:47.230 ********* 2025-08-29 14:57:42.924912 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.924920 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.924927 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.924934 | orchestrator | 2025-08-29 14:57:42.924941 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-08-29 14:57:42.924949 | orchestrator | Friday 29 August 2025 14:56:27 +0000 (0:00:00.809) 0:05:48.040 ********* 2025-08-29 14:57:42.924956 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.924964 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.924977 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.924985 | orchestrator | 2025-08-29 14:57:42.924992 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-08-29 14:57:42.925000 | orchestrator | Friday 29 August 2025 14:56:29 +0000 
(0:00:01.463) 0:05:49.504 ********* 2025-08-29 14:57:42.925008 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:57:42.925032 | orchestrator | 2025-08-29 14:57:42.925040 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-08-29 14:57:42.925047 | orchestrator | Friday 29 August 2025 14:56:30 +0000 (0:00:01.470) 0:05:50.974 ********* 2025-08-29 14:57:42.925056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 14:57:42.925069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 14:57:42.925078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 14:57:42.925129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 14:57:42.925142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 14:57:42.925149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 14:57:42.925196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 14:57:42.925205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 14:57:42.925213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 
'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 14:57:42.925242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 14:57:42.925254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 14:57:42.925262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925269 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 14:57:42.925289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 
14:57:42.925303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 14:57:42.925317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 14:57:42.925324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 14:57:42.925344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 14:57:42.925366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}}) 
2025-08-29 14:57:42.925386 | orchestrator | 
2025-08-29 14:57:42.925394 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 
2025-08-29 14:57:42.925400 | orchestrator | Friday 29 August 2025 14:56:34 +0000 (0:00:04.133) 0:05:55.108 ********* 
2025-08-29 14:57:42.925456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  
2025-08-29 14:57:42.925480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  
2025-08-29 14:57:42.925488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 14:57:42.925506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 14:57:42.925526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 14:57:42.925540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 14:57:42.925556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 14:57:42.925575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 14:57:42.925598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 14:57:42.925605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 14:57:42.925620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 14:57:42.925630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 14:57:42.925638 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.925646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 14:57:42.925671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 14:57:42.925703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 14:57:42.925711 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.925723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 14:57:42.925736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 14:57:42.925743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:57:42.925758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  
2025-08-29 14:57:42.925765 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 14:57:42.925772 | orchestrator | 
2025-08-29 14:57:42.925779 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 
2025-08-29 14:57:42.925786 | orchestrator | Friday 29 August 2025 14:56:35 +0000 (0:00:01.158) 0:05:56.266 ********* 
2025-08-29 14:57:42.925793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  
2025-08-29 14:57:42.925804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  
2025-08-29 14:57:42.925811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  
2025-08-29 14:57:42.925823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  
2025-08-29 14:57:42.925831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  
2025-08-29 14:57:42.925842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  
2025-08-29 14:57:42.925849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  
2025-08-29 14:57:42.925857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  
2025-08-29 14:57:42.925863 | orchestrator | skipping: [testbed-node-1] 
2025-08-29 14:57:42.925871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  
2025-08-29 14:57:42.925878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  
2025-08-29 14:57:42.925885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  
2025-08-29 14:57:42.925892 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 14:57:42.925900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  
2025-08-29 14:57:42.925907 | orchestrator | skipping: [testbed-node-0] 
2025-08-29 14:57:42.925914 | orchestrator | 
2025-08-29 14:57:42.925921 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 
2025-08-29 14:57:42.925928 | orchestrator | Friday 29 August 2025 14:56:36 +0000 (0:00:01.103) 0:05:57.370 ********* 
2025-08-29 14:57:42.925936 | orchestrator | skipping: [testbed-node-0] 
2025-08-29 14:57:42.925942 | orchestrator | skipping: [testbed-node-1] 
2025-08-29 14:57:42.925949 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 14:57:42.925956 | orchestrator | 
2025-08-29 14:57:42.925962 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 
2025-08-29 14:57:42.925969 | orchestrator | Friday 29 August 2025 14:56:37 +0000 (0:00:00.494) 0:05:57.865 ********* 
2025-08-29 14:57:42.925977 | orchestrator | skipping: [testbed-node-0] 
2025-08-29 14:57:42.925984 | orchestrator | skipping: [testbed-node-1] 
2025-08-29 14:57:42.925990 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 14:57:42.925997 | orchestrator | 
2025-08-29 14:57:42.926005 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 
2025-08-29 14:57:42.926078 | orchestrator | Friday 29 August 2025 14:56:38 +0000 (0:00:01.488) 0:05:59.354 ********* 
2025-08-29 14:57:42.926090 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 
2025-08-29 14:57:42.926103 | orchestrator | 
2025-08-29 14:57:42.926111 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 
2025-08-29 14:57:42.926118 | orchestrator | Friday 29 August 2025 14:56:40 +0000 (0:00:01.830) 0:06:01.184 ********* 
2025-08-29 14:57:42.926131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:57:42.926146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 
'rabbitmq'}}}}) 2025-08-29 14:57:42.926154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:57:42.926161 | orchestrator | 2025-08-29 14:57:42.926168 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-08-29 14:57:42.926175 | orchestrator | Friday 29 August 2025 14:56:43 +0000 (0:00:02.585) 0:06:03.770 ********* 2025-08-29 14:57:42.926183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 14:57:42.926199 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.926210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 14:57:42.926218 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.926230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 14:57:42.926238 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.926245 | orchestrator | 2025-08-29 14:57:42.926252 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-08-29 14:57:42.926258 | orchestrator | Friday 29 August 2025 14:56:43 +0000 (0:00:00.395) 0:06:04.166 ********* 2025-08-29 14:57:42.926266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 14:57:42.926273 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.926280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 14:57:42.926288 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.926295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 14:57:42.926302 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.926310 | orchestrator | 2025-08-29 14:57:42.926316 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-08-29 14:57:42.926323 | orchestrator | Friday 29 August 2025 14:56:44 +0000 (0:00:01.281) 0:06:05.447 ********* 2025-08-29 14:57:42.926330 | 
orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.926344 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.926351 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.926359 | orchestrator | 2025-08-29 14:57:42.926366 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-08-29 14:57:42.926373 | orchestrator | Friday 29 August 2025 14:56:45 +0000 (0:00:00.614) 0:06:06.061 ********* 2025-08-29 14:57:42.926380 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.926388 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.926395 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.926402 | orchestrator | 2025-08-29 14:57:42.926408 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-08-29 14:57:42.926415 | orchestrator | Friday 29 August 2025 14:56:46 +0000 (0:00:01.387) 0:06:07.448 ********* 2025-08-29 14:57:42.926422 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:57:42.926429 | orchestrator | 2025-08-29 14:57:42.926436 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-08-29 14:57:42.926443 | orchestrator | Friday 29 August 2025 14:56:48 +0000 (0:00:01.923) 0:06:09.372 ********* 2025-08-29 14:57:42.926454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 14:57:42.926468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 14:57:42.926476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 14:57:42.926484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 14:57:42.926497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 
'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 14:57:42.926507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 14:57:42.926514 | orchestrator | 2025-08-29 14:57:42.926525 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-08-29 14:57:42.926532 | orchestrator | Friday 29 August 2025 14:56:55 +0000 (0:00:06.891) 0:06:16.263 ********* 2025-08-29 14:57:42.926539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 14:57:42.926547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 14:57:42.926560 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.926567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 14:57:42.926579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 14:57:42.926586 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.926598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 14:57:42.926605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 14:57:42.926617 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.926624 | orchestrator | 2025-08-29 14:57:42.926631 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-08-29 14:57:42.926638 | orchestrator | Friday 29 August 2025 14:56:56 +0000 (0:00:00.684) 0:06:16.947 ********* 2025-08-29 14:57:42.926645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 14:57:42.926653 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 14:57:42.926660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 14:57:42.926668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 14:57:42.926675 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.926682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 14:57:42.926692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 14:57:42.926700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 14:57:42.926706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 14:57:42.926714 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.926721 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 14:57:42.926731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 14:57:42.926739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 14:57:42.926746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 14:57:42.926758 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.926765 | orchestrator | 2025-08-29 14:57:42.926773 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-08-29 14:57:42.926780 | orchestrator | Friday 29 August 2025 14:56:58 +0000 (0:00:01.683) 0:06:18.631 ********* 2025-08-29 14:57:42.926787 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.926793 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:42.926800 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.926807 | orchestrator | 2025-08-29 14:57:42.926814 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-08-29 14:57:42.926821 | orchestrator | Friday 29 August 2025 14:56:59 +0000 (0:00:01.286) 0:06:19.917 ********* 2025-08-29 14:57:42.926828 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:42.926835 | orchestrator | changed: [testbed-node-1] 2025-08-29 
14:57:42.926842 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:42.926849 | orchestrator | 2025-08-29 14:57:42.926856 | orchestrator | TASK [include_role : swift] **************************************************** 2025-08-29 14:57:42.926864 | orchestrator | Friday 29 August 2025 14:57:01 +0000 (0:00:02.054) 0:06:21.972 ********* 2025-08-29 14:57:42.926871 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.926878 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.926885 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.926892 | orchestrator | 2025-08-29 14:57:42.926899 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-08-29 14:57:42.926906 | orchestrator | Friday 29 August 2025 14:57:01 +0000 (0:00:00.329) 0:06:22.302 ********* 2025-08-29 14:57:42.926913 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.926920 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.926927 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.926934 | orchestrator | 2025-08-29 14:57:42.926941 | orchestrator | TASK [include_role : trove] **************************************************** 2025-08-29 14:57:42.926949 | orchestrator | Friday 29 August 2025 14:57:02 +0000 (0:00:00.291) 0:06:22.593 ********* 2025-08-29 14:57:42.926956 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.926963 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:42.926970 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:42.926977 | orchestrator | 2025-08-29 14:57:42.926984 | orchestrator | TASK [include_role : venus] **************************************************** 2025-08-29 14:57:42.926991 | orchestrator | Friday 29 August 2025 14:57:02 +0000 (0:00:00.507) 0:06:23.101 ********* 2025-08-29 14:57:42.926998 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:42.927005 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
2025-08-29 14:57:42.927025 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.927033 | orchestrator |
2025-08-29 14:57:42.927040 | orchestrator | TASK [include_role : watcher] **************************************************
2025-08-29 14:57:42.927047 | orchestrator | Friday 29 August 2025 14:57:02 +0000 (0:00:00.272) 0:06:23.373 *********
2025-08-29 14:57:42.927054 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.927061 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.927068 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.927074 | orchestrator |
2025-08-29 14:57:42.927081 | orchestrator | TASK [include_role : zun] ******************************************************
2025-08-29 14:57:42.927088 | orchestrator | Friday 29 August 2025 14:57:03 +0000 (0:00:00.286) 0:06:23.660 *********
2025-08-29 14:57:42.927095 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.927102 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.927109 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.927117 | orchestrator |
2025-08-29 14:57:42.927124 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-08-29 14:57:42.927131 | orchestrator | Friday 29 August 2025 14:57:03 +0000 (0:00:00.760) 0:06:24.420 *********
2025-08-29 14:57:42.927138 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:57:42.927150 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:57:42.927156 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:57:42.927164 | orchestrator |
2025-08-29 14:57:42.927171 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-08-29 14:57:42.927180 | orchestrator | Friday 29 August 2025 14:57:04 +0000 (0:00:00.687) 0:06:25.108 *********
2025-08-29 14:57:42.927188 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:57:42.927194 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:57:42.927202 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:57:42.927209 | orchestrator |
2025-08-29 14:57:42.927216 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-08-29 14:57:42.927223 | orchestrator | Friday 29 August 2025 14:57:04 +0000 (0:00:00.306) 0:06:25.415 *********
2025-08-29 14:57:42.927229 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:57:42.927236 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:57:42.927243 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:57:42.927250 | orchestrator |
2025-08-29 14:57:42.927257 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-08-29 14:57:42.927264 | orchestrator | Friday 29 August 2025 14:57:05 +0000 (0:00:00.849) 0:06:26.265 *********
2025-08-29 14:57:42.927271 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:57:42.927278 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:57:42.927285 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:57:42.927292 | orchestrator |
2025-08-29 14:57:42.927299 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-08-29 14:57:42.927306 | orchestrator | Friday 29 August 2025 14:57:06 +0000 (0:00:01.086) 0:06:27.351 *********
2025-08-29 14:57:42.927313 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:57:42.927320 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:57:42.927331 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:57:42.927338 | orchestrator |
2025-08-29 14:57:42.927345 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-08-29 14:57:42.927352 | orchestrator | Friday 29 August 2025 14:57:07 +0000 (0:00:00.890) 0:06:28.241 *********
2025-08-29 14:57:42.927359 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:57:42.927366 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:57:42.927371 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:57:42.927377 | orchestrator |
2025-08-29 14:57:42.927384 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-08-29 14:57:42.927390 | orchestrator | Friday 29 August 2025 14:57:12 +0000 (0:00:04.569) 0:06:32.811 *********
2025-08-29 14:57:42.927398 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:57:42.927405 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:57:42.927412 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:57:42.927418 | orchestrator |
2025-08-29 14:57:42.927424 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-08-29 14:57:42.927431 | orchestrator | Friday 29 August 2025 14:57:15 +0000 (0:00:02.757) 0:06:35.569 *********
2025-08-29 14:57:42.927438 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:57:42.927444 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:57:42.927451 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:57:42.927457 | orchestrator |
2025-08-29 14:57:42.927464 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-08-29 14:57:42.927470 | orchestrator | Friday 29 August 2025 14:57:23 +0000 (0:00:08.267) 0:06:43.836 *********
2025-08-29 14:57:42.927478 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:57:42.927485 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:57:42.927497 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:57:42.927504 | orchestrator |
2025-08-29 14:57:42.927511 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-08-29 14:57:42.927518 | orchestrator | Friday 29 August 2025 14:57:28 +0000 (0:00:04.907) 0:06:48.744 *********
2025-08-29 14:57:42.927525 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:57:42.927533 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:57:42.927540 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:57:42.927552 | orchestrator |
2025-08-29 14:57:42.927559 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-08-29 14:57:42.927566 | orchestrator | Friday 29 August 2025 14:57:32 +0000 (0:00:04.114) 0:06:52.859 *********
2025-08-29 14:57:42.927573 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.927579 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.927587 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.927593 | orchestrator |
2025-08-29 14:57:42.927601 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-08-29 14:57:42.927609 | orchestrator | Friday 29 August 2025 14:57:32 +0000 (0:00:00.308) 0:06:53.167 *********
2025-08-29 14:57:42.927615 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.927623 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.927630 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.927637 | orchestrator |
2025-08-29 14:57:42.927645 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-08-29 14:57:42.927652 | orchestrator | Friday 29 August 2025 14:57:32 +0000 (0:00:00.310) 0:06:53.478 *********
2025-08-29 14:57:42.927660 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.927667 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.927674 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.927681 | orchestrator |
2025-08-29 14:57:42.927688 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-08-29 14:57:42.927696 | orchestrator | Friday 29 August 2025 14:57:33 +0000 (0:00:00.574) 0:06:54.052 *********
2025-08-29 14:57:42.927703 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.927710 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.927718 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.927725 | orchestrator |
2025-08-29 14:57:42.927733 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-08-29 14:57:42.927739 | orchestrator | Friday 29 August 2025 14:57:33 +0000 (0:00:00.285) 0:06:54.337 *********
2025-08-29 14:57:42.927747 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.927754 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.927762 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.927768 | orchestrator |
2025-08-29 14:57:42.927776 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-08-29 14:57:42.927783 | orchestrator | Friday 29 August 2025 14:57:34 +0000 (0:00:00.314) 0:06:54.651 *********
2025-08-29 14:57:42.927791 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:57:42.927798 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:57:42.927805 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:57:42.927812 | orchestrator |
2025-08-29 14:57:42.927819 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-08-29 14:57:42.927826 | orchestrator | Friday 29 August 2025 14:57:34 +0000 (0:00:00.315) 0:06:54.966 *********
2025-08-29 14:57:42.927834 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:57:42.927845 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:57:42.927852 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:57:42.927860 | orchestrator |
2025-08-29 14:57:42.927867 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-08-29 14:57:42.927874 | orchestrator | Friday 29 August 2025 14:57:39 +0000 (0:00:05.051) 0:07:00.018 *********
2025-08-29 14:57:42.927882 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:57:42.927889 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:57:42.927895 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:57:42.927902 | orchestrator |
2025-08-29 14:57:42.927910 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:57:42.927917 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-08-29 14:57:42.927925 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-08-29 14:57:42.927941 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-08-29 14:57:42.927949 | orchestrator |
2025-08-29 14:57:42.927957 | orchestrator |
2025-08-29 14:57:42.927970 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:57:42.927977 | orchestrator | Friday 29 August 2025 14:57:40 +0000 (0:00:00.898) 0:07:00.917 *********
2025-08-29 14:57:42.927984 | orchestrator | ===============================================================================
2025-08-29 14:57:42.927991 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.27s
2025-08-29 14:57:42.927998 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 7.21s
2025-08-29 14:57:42.928006 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.89s
2025-08-29 14:57:42.928028 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.72s
2025-08-29 14:57:42.928035 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 6.18s
2025-08-29 14:57:42.928042 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.69s
2025-08-29 14:57:42.928048 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.48s
2025-08-29 14:57:42.928054 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.43s
2025-08-29 14:57:42.928061 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.31s
2025-08-29 14:57:42.928068 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.05s
2025-08-29 14:57:42.928076 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.98s
2025-08-29 14:57:42.928082 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.91s
2025-08-29 14:57:42.928089 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.90s
2025-08-29 14:57:42.928096 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.79s
2025-08-29 14:57:42.928103 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.78s
2025-08-29 14:57:42.928109 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.68s
2025-08-29 14:57:42.928116 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.57s
2025-08-29 14:57:42.928123 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.57s
2025-08-29 14:57:42.928130 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.54s
2025-08-29 14:57:42.928137 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.37s
2025-08-29 14:57:42.928143 | orchestrator | 2025-08-29 14:57:42 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED
2025-08-29 14:57:42.928150 | orchestrator | 2025-08-29 14:57:42 | INFO  | Task 7418b614-4c70-4ddc-a186-57eb91a9e2be is in state STARTED
2025-08-29 14:57:42.928157 | orchestrator | 2025-08-29 14:57:42 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED
2025-08-29 14:57:42.928164 | orchestrator | 2025-08-29 14:57:42 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:57:45.974310 | orchestrator |
2025-08-29 15:00:06 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED 2025-08-29 15:00:06.314243 | orchestrator | 2025-08-29 15:00:06 | INFO  | Task 7418b614-4c70-4ddc-a186-57eb91a9e2be is in state STARTED 2025-08-29 15:00:06.315896 | orchestrator | 2025-08-29 15:00:06 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 15:00:06.315987 | orchestrator | 2025-08-29 15:00:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:09.364144 | orchestrator | 2025-08-29 15:00:09 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED 2025-08-29 15:00:09.366340 | orchestrator | 2025-08-29 15:00:09 | INFO  | Task 7418b614-4c70-4ddc-a186-57eb91a9e2be is in state STARTED 2025-08-29 15:00:09.369507 | orchestrator | 2025-08-29 15:00:09 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 15:00:09.370102 | orchestrator | 2025-08-29 15:00:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:12.418518 | orchestrator | 2025-08-29 15:00:12 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED 2025-08-29 15:00:12.419747 | orchestrator | 2025-08-29 15:00:12 | INFO  | Task 7418b614-4c70-4ddc-a186-57eb91a9e2be is in state STARTED 2025-08-29 15:00:12.425514 | orchestrator | 2025-08-29 15:00:12 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state STARTED 2025-08-29 15:00:12.426395 | orchestrator | 2025-08-29 15:00:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:15.477172 | orchestrator | 2025-08-29 15:00:15 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED 2025-08-29 15:00:15.478574 | orchestrator | 2025-08-29 15:00:15 | INFO  | Task 7418b614-4c70-4ddc-a186-57eb91a9e2be is in state STARTED 2025-08-29 15:00:15.487391 | orchestrator | 2025-08-29 15:00:15 | INFO  | Task 5dcb4e15-8ed9-4f34-b558-751b1d874f50 is in state SUCCESS 2025-08-29 15:00:15.489693 | orchestrator | 2025-08-29 15:00:15.489724 | 
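The status lines above come from a client polling several asynchronous task IDs until each reaches SUCCESS, sleeping between rounds. A minimal sketch of such a loop, assuming a hypothetical `fetch_state` callback (the actual OSISM client code is not part of this log):

```python
import time

def wait_for_tasks(task_ids, fetch_state, interval=1.0, timeout=900):
    """Poll each task until all report SUCCESS, echoing the log's
    'Task <id> is in state ...' / 'Wait N second(s)' cycle.
    fetch_state(task_id) -> state string is an assumed callback."""
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        for task_id in sorted(pending):
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)  # stop polling finished tasks
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return True
```

In the log, the three tasks stay STARTED for roughly 16 minutes before the last one reports SUCCESS, at which point the buffered Ansible output below is flushed.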
orchestrator | 2025-08-29 15:00:15.489729 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-08-29 15:00:15.489734 | orchestrator | 2025-08-29 15:00:15.489738 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-08-29 15:00:15.489743 | orchestrator | Friday 29 August 2025 14:47:09 +0000 (0:00:00.789) 0:00:00.789 ********* 2025-08-29 15:00:15.489749 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:15.489754 | orchestrator | 2025-08-29 15:00:15.489758 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-08-29 15:00:15.489762 | orchestrator | Friday 29 August 2025 14:47:10 +0000 (0:00:01.148) 0:00:01.938 ********* 2025-08-29 15:00:15.489766 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.489771 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.489792 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.489796 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.489800 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.489804 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.489808 | orchestrator | 2025-08-29 15:00:15.489812 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-08-29 15:00:15.489816 | orchestrator | Friday 29 August 2025 14:47:11 +0000 (0:00:01.461) 0:00:03.399 ********* 2025-08-29 15:00:15.489820 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.489824 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.489827 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.489831 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.489835 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.489839 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.489858 
| orchestrator | 2025-08-29 15:00:15.489863 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-08-29 15:00:15.489866 | orchestrator | Friday 29 August 2025 14:47:12 +0000 (0:00:00.768) 0:00:04.167 ********* 2025-08-29 15:00:15.489871 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.489874 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.489879 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.489918 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.489926 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.489932 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.489939 | orchestrator | 2025-08-29 15:00:15.489945 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-08-29 15:00:15.489949 | orchestrator | Friday 29 August 2025 14:47:13 +0000 (0:00:00.972) 0:00:05.140 ********* 2025-08-29 15:00:15.489952 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.489956 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.489960 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.489964 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.489968 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.489972 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.489976 | orchestrator | 2025-08-29 15:00:15.489980 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-08-29 15:00:15.489984 | orchestrator | Friday 29 August 2025 14:47:14 +0000 (0:00:00.796) 0:00:05.936 ********* 2025-08-29 15:00:15.489987 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.489991 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.489995 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.489998 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.490002 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.490006 | orchestrator | ok: 
[testbed-node-2] 2025-08-29 15:00:15.490009 | orchestrator | 2025-08-29 15:00:15.490052 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-08-29 15:00:15.490059 | orchestrator | Friday 29 August 2025 14:47:14 +0000 (0:00:00.479) 0:00:06.416 ********* 2025-08-29 15:00:15.490063 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.490067 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.490071 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.490075 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.490078 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.490082 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.490086 | orchestrator | 2025-08-29 15:00:15.490090 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-08-29 15:00:15.490094 | orchestrator | Friday 29 August 2025 14:47:15 +0000 (0:00:00.969) 0:00:07.385 ********* 2025-08-29 15:00:15.490098 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.490143 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.490147 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.490151 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.490155 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.490159 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.490162 | orchestrator | 2025-08-29 15:00:15.490166 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-08-29 15:00:15.490209 | orchestrator | Friday 29 August 2025 14:47:16 +0000 (0:00:00.656) 0:00:08.042 ********* 2025-08-29 15:00:15.490218 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.490222 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.490226 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.490229 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.490233 | orchestrator | 
ok: [testbed-node-1] 2025-08-29 15:00:15.490236 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.490240 | orchestrator | 2025-08-29 15:00:15.490244 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-08-29 15:00:15.490247 | orchestrator | Friday 29 August 2025 14:47:17 +0000 (0:00:00.926) 0:00:08.968 ********* 2025-08-29 15:00:15.490251 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:00:15.490255 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:00:15.490259 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:00:15.490263 | orchestrator | 2025-08-29 15:00:15.490266 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-08-29 15:00:15.490270 | orchestrator | Friday 29 August 2025 14:47:18 +0000 (0:00:00.810) 0:00:09.779 ********* 2025-08-29 15:00:15.490274 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.490278 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.490281 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.490285 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.490289 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.490292 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.490296 | orchestrator | 2025-08-29 15:00:15.490308 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-08-29 15:00:15.490312 | orchestrator | Friday 29 August 2025 14:47:19 +0000 (0:00:01.480) 0:00:11.259 ********* 2025-08-29 15:00:15.490317 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:00:15.490321 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:00:15.490326 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:00:15.490330 | orchestrator | 2025-08-29 15:00:15.490334 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-08-29 15:00:15.490339 | orchestrator | Friday 29 August 2025 14:47:22 +0000 (0:00:02.745) 0:00:14.004 ********* 2025-08-29 15:00:15.490343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 15:00:15.490347 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 15:00:15.490352 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 15:00:15.490356 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.490360 | orchestrator | 2025-08-29 15:00:15.490364 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-08-29 15:00:15.490369 | orchestrator | Friday 29 August 2025 14:47:22 +0000 (0:00:00.443) 0:00:14.448 ********* 2025-08-29 15:00:15.490375 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.490382 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.490387 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.490391 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.490399 | orchestrator | 2025-08-29 15:00:15.490406 | orchestrator | 
TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-08-29 15:00:15.490412 | orchestrator | Friday 29 August 2025 14:47:23 +0000 (0:00:00.894) 0:00:15.343 ********* 2025-08-29 15:00:15.490422 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.490431 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.490442 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.490449 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.490457 | orchestrator | 2025-08-29 15:00:15.490461 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-08-29 15:00:15.490466 | orchestrator | Friday 29 August 2025 14:47:23 +0000 (0:00:00.398) 0:00:15.741 ********* 2025-08-29 15:00:15.490475 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-08-29 14:47:20.145119', 'end': '2025-08-29 14:47:20.416988', 'delta': '0:00:00.271869', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.490483 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-08-29 14:47:21.066658', 'end': '2025-08-29 14:47:21.324266', 'delta': '0:00:00.257608', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.490487 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-08-29 14:47:21.785896', 'end': '2025-08-29 14:47:22.068623', 'delta': '0:00:00.282727', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 
'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.490496 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.490501 | orchestrator | 2025-08-29 15:00:15.490505 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-08-29 15:00:15.490509 | orchestrator | Friday 29 August 2025 14:47:24 +0000 (0:00:00.321) 0:00:16.063 ********* 2025-08-29 15:00:15.490514 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.490518 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.490522 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.490527 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.490531 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.490535 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.490539 | orchestrator | 2025-08-29 15:00:15.490543 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-08-29 15:00:15.490548 | orchestrator | Friday 29 August 2025 14:47:25 +0000 (0:00:01.554) 0:00:17.618 ********* 2025-08-29 15:00:15.490552 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:00:15.490556 | orchestrator | 2025-08-29 15:00:15.490560 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-08-29 15:00:15.490565 | orchestrator | Friday 29 August 2025 14:47:26 +0000 (0:00:00.831) 0:00:18.449 ********* 2025-08-29 15:00:15.490569 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.490573 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.490577 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.490581 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.490585 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 15:00:15.490589 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.490593 | orchestrator | 2025-08-29 15:00:15.490598 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-08-29 15:00:15.490656 | orchestrator | Friday 29 August 2025 14:47:28 +0000 (0:00:02.309) 0:00:20.759 ********* 2025-08-29 15:00:15.490663 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.490670 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.490677 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.490687 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.490694 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.490700 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.490706 | orchestrator | 2025-08-29 15:00:15.490712 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-08-29 15:00:15.490734 | orchestrator | Friday 29 August 2025 14:47:31 +0000 (0:00:02.374) 0:00:23.134 ********* 2025-08-29 15:00:15.490740 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.490746 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.490770 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.490777 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.490783 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.490789 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.490795 | orchestrator | 2025-08-29 15:00:15.490802 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-08-29 15:00:15.490806 | orchestrator | Friday 29 August 2025 14:47:33 +0000 (0:00:02.063) 0:00:25.197 ********* 2025-08-29 15:00:15.490810 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.490814 | orchestrator | 2025-08-29 15:00:15.490817 | orchestrator | TASK [ceph-facts : Generate cluster fsid] 
************************************** 2025-08-29 15:00:15.490821 | orchestrator | Friday 29 August 2025 14:47:33 +0000 (0:00:00.207) 0:00:25.404 ********* 2025-08-29 15:00:15.490825 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.490829 | orchestrator | 2025-08-29 15:00:15.490832 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-08-29 15:00:15.490836 | orchestrator | Friday 29 August 2025 14:47:33 +0000 (0:00:00.357) 0:00:25.761 ********* 2025-08-29 15:00:15.490840 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.490901 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.490916 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.490922 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.490929 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.490933 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.490937 | orchestrator | 2025-08-29 15:00:15.490945 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-08-29 15:00:15.490984 | orchestrator | Friday 29 August 2025 14:47:35 +0000 (0:00:01.159) 0:00:26.921 ********* 2025-08-29 15:00:15.490988 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.490992 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.490996 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.491000 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.491003 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.491007 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.491011 | orchestrator | 2025-08-29 15:00:15.491014 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-08-29 15:00:15.491018 | orchestrator | Friday 29 August 2025 14:47:36 +0000 (0:00:01.325) 0:00:28.247 ********* 2025-08-29 15:00:15.491022 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 15:00:15.491026 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.491029 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.491033 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.491037 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.491040 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.491044 | orchestrator | 2025-08-29 15:00:15.491060 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-08-29 15:00:15.491064 | orchestrator | Friday 29 August 2025 14:47:37 +0000 (0:00:01.332) 0:00:29.580 ********* 2025-08-29 15:00:15.491068 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.491072 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.491075 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.491079 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.491083 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.491086 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.491090 | orchestrator | 2025-08-29 15:00:15.491104 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-08-29 15:00:15.491109 | orchestrator | Friday 29 August 2025 14:47:38 +0000 (0:00:01.090) 0:00:30.670 ********* 2025-08-29 15:00:15.491113 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.491116 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.491120 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.491124 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.491127 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.491131 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.491134 | orchestrator | 2025-08-29 15:00:15.491138 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-08-29 15:00:15.491142 | 
orchestrator | Friday 29 August 2025 14:47:39 +0000 (0:00:00.622) 0:00:31.293 ********* 2025-08-29 15:00:15.491146 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.491149 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.491153 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.491157 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.491161 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.491164 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.491170 | orchestrator | 2025-08-29 15:00:15.491194 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-08-29 15:00:15.491202 | orchestrator | Friday 29 August 2025 14:47:40 +0000 (0:00:00.798) 0:00:32.092 ********* 2025-08-29 15:00:15.491208 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.491215 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.491219 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.491222 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.491226 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.491234 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.491238 | orchestrator | 2025-08-29 15:00:15.491242 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-08-29 15:00:15.491246 | orchestrator | Friday 29 August 2025 14:47:41 +0000 (0:00:00.786) 0:00:32.878 ********* 2025-08-29 15:00:15.491282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--73f6d854--e6b6--54de--b399--c089d2858352-osd--block--73f6d854--e6b6--54de--b399--c089d2858352', 'dm-uuid-LVM-TEALsbrfrE7SLR1OalMwM0X8nCCTvLnFVAaUKmbkx6MVUCEiqPR6jSIbkRHXIqFa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b0db6b07--6be9--5d1b--9597--ea455233b3a1-osd--block--b0db6b07--6be9--5d1b--9597--ea455233b3a1', 'dm-uuid-LVM-V0mIVikotbLWfFY3h0eQCXH0vRmpIDcsVOLNkcVPBXWn7BukTDvVp0bBj80gOObg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-08-29 15:00:15.491332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a', 'scsi-SQEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.491347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--73f6d854--e6b6--54de--b399--c089d2858352-osd--block--73f6d854--e6b6--54de--b399--c089d2858352'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V9ovz7-GIq2-tF1d-Owus-2UHr-v8sj-1Fxx35', 'scsi-0QEMU_QEMU_HARDDISK_b5eca971-d360-4d10-a7ea-637f4b5fbeee', 'scsi-SQEMU_QEMU_HARDDISK_b5eca971-d360-4d10-a7ea-637f4b5fbeee'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.491360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b0db6b07--6be9--5d1b--9597--ea455233b3a1-osd--block--b0db6b07--6be9--5d1b--9597--ea455233b3a1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tds9kX-NVeh-oQdQ-yJw0-iJwc-Se6q-lU97tY', 'scsi-0QEMU_QEMU_HARDDISK_8530debd-d017-4f26-8837-9c6ea90d3888', 'scsi-SQEMU_QEMU_HARDDISK_8530debd-d017-4f26-8837-9c6ea90d3888'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.491367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_994268b6-638b-49ae-9337-a5a883f2caf6', 'scsi-SQEMU_QEMU_HARDDISK_994268b6-638b-49ae-9337-a5a883f2caf6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.491372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8955e74f--f88a--5c8e--a869--5f490c143acc-osd--block--8955e74f--f88a--5c8e--a869--5f490c143acc', 'dm-uuid-LVM-GL8RBRk7JsbOtuMXFSkoGw73fN6hxG0ak4TjArrEISI2heGBlA4cRzgqY9nPbFnR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.491387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--76bc2ac4--c5cd--591d--a103--fddbd09e4373-osd--block--76bc2ac4--c5cd--591d--a103--fddbd09e4373', 'dm-uuid-LVM-P1Vrtaz3bb7hJ1aKWnFLuz2LxKSraMY7EYtxcho3wZvorudivDul03HJRET9qYqN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-08-29 15:00:15.491415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491422 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.491428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dc8c4f7f--2eb1--5ff6--8642--584f5da1f281-osd--block--dc8c4f7f--2eb1--5ff6--8642--584f5da1f281', 'dm-uuid-LVM-Eq4cq90aBFRmjdeLun4eAe2ZEskDWwIf24G83ImBo8oYKublABAfbDRepl2GsjEU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--74173feb--4ed6--53ea--9fd2--1d4ff9ba2fde-osd--block--74173feb--4ed6--53ea--9fd2--1d4ff9ba2fde', 'dm-uuid-LVM-iKyJOsH78rtvgbLs6UfPuiIqW1omUJULgefd1komE3xEnVuXzInDXRsz03pmssLm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491484 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part1', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part14', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part15', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part16', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.491967 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.491985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8955e74f--f88a--5c8e--a869--5f490c143acc-osd--block--8955e74f--f88a--5c8e--a869--5f490c143acc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tAXKcI-A8hy-IioI-LWvB-km1w-baTb-bsyZta', 'scsi-0QEMU_QEMU_HARDDISK_cfe2d7b1-468c-4925-a2ef-e57e3e9904a6', 'scsi-SQEMU_QEMU_HARDDISK_cfe2d7b1-468c-4925-a2ef-e57e3e9904a6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.491992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--76bc2ac4--c5cd--591d--a103--fddbd09e4373-osd--block--76bc2ac4--c5cd--591d--a103--fddbd09e4373'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xBwyMF-FYxc-04qI-0fEU-AYWj-zcBC-304X7g', 'scsi-0QEMU_QEMU_HARDDISK_2af32ac1-7951-4112-a429-d0343cb67ad9', 'scsi-SQEMU_QEMU_HARDDISK_2af32ac1-7951-4112-a429-d0343cb67ad9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.491997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4cd25e5-e70d-493f-a3e8-ae6027cdfc98', 'scsi-SQEMU_QEMU_HARDDISK_e4cd25e5-e70d-493f-a3e8-ae6027cdfc98'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.492008 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-06-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.492012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492016 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492024 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.492028 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492034 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d', 'scsi-SQEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d-part1', 'scsi-SQEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d-part14', 'scsi-SQEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d-part15', 'scsi-SQEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d-part16', 'scsi-SQEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.492039 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492045 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--dc8c4f7f--2eb1--5ff6--8642--584f5da1f281-osd--block--dc8c4f7f--2eb1--5ff6--8642--584f5da1f281'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YYXEW6-66WM-VdEs-St5p-kHqo-tmbV-ohONUP', 'scsi-0QEMU_QEMU_HARDDISK_e2b81981-b087-421f-a1f1-ab20210f7cdd', 'scsi-SQEMU_QEMU_HARDDISK_e2b81981-b087-421f-a1f1-ab20210f7cdd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.492086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--74173feb--4ed6--53ea--9fd2--1d4ff9ba2fde-osd--block--74173feb--4ed6--53ea--9fd2--1d4ff9ba2fde'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HIcRjp-yJfB-W5bx-nvaq-qvXo-41yG-dor8I7', 'scsi-0QEMU_QEMU_HARDDISK_4ecbb96d-8085-4234-aac0-aef459b35ca9', 'scsi-SQEMU_QEMU_HARDDISK_4ecbb96d-8085-4234-aac0-aef459b35ca9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.492129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492145 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d88be871-1de9-4c4e-96cc-2f99c6f9bcd6', 'scsi-SQEMU_QEMU_HARDDISK_d88be871-1de9-4c4e-96cc-2f99c6f9bcd6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.492153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-06-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.492160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820', 'scsi-SQEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820-part1', 'scsi-SQEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820-part14', 'scsi-SQEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820-part15', 'scsi-SQEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820-part16', 'scsi-SQEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.492168 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-06-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.492172 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.492176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-08-29 15:00:15.492192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae', 'scsi-SQEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae-part1', 'scsi-SQEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae-part14', 'scsi-SQEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae-part15', 'scsi-SQEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae-part16', 'scsi-SQEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.492230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.492234 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.492238 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.492241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492333 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:00:15.492344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf', 'scsi-SQEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf-part1', 'scsi-SQEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf-part14', 'scsi-SQEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf-part15', 'scsi-SQEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf-part16', 'scsi-SQEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.492352 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-06-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:00:15.492359 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.492363 | orchestrator | 2025-08-29 15:00:15.492367 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-08-29 15:00:15.492371 | orchestrator | Friday 29 August 2025 14:47:43 +0000 (0:00:02.183) 0:00:35.062 ********* 2025-08-29 15:00:15.492375 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--73f6d854--e6b6--54de--b399--c089d2858352-osd--block--73f6d854--e6b6--54de--b399--c089d2858352', 'dm-uuid-LVM-TEALsbrfrE7SLR1OalMwM0X8nCCTvLnFVAaUKmbkx6MVUCEiqPR6jSIbkRHXIqFa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492380 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8955e74f--f88a--5c8e--a869--5f490c143acc-osd--block--8955e74f--f88a--5c8e--a869--5f490c143acc', 'dm-uuid-LVM-GL8RBRk7JsbOtuMXFSkoGw73fN6hxG0ak4TjArrEISI2heGBlA4cRzgqY9nPbFnR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492384 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b0db6b07--6be9--5d1b--9597--ea455233b3a1-osd--block--b0db6b07--6be9--5d1b--9597--ea455233b3a1', 'dm-uuid-LVM-V0mIVikotbLWfFY3h0eQCXH0vRmpIDcsVOLNkcVPBXWn7BukTDvVp0bBj80gOObg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492390 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--76bc2ac4--c5cd--591d--a103--fddbd09e4373-osd--block--76bc2ac4--c5cd--591d--a103--fddbd09e4373', 'dm-uuid-LVM-P1Vrtaz3bb7hJ1aKWnFLuz2LxKSraMY7EYtxcho3wZvorudivDul03HJRET9qYqN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492396 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492416 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492421 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492425 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dc8c4f7f--2eb1--5ff6--8642--584f5da1f281-osd--block--dc8c4f7f--2eb1--5ff6--8642--584f5da1f281', 'dm-uuid-LVM-Eq4cq90aBFRmjdeLun4eAe2ZEskDWwIf24G83ImBo8oYKublABAfbDRepl2GsjEU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492431 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--74173feb--4ed6--53ea--9fd2--1d4ff9ba2fde-osd--block--74173feb--4ed6--53ea--9fd2--1d4ff9ba2fde', 'dm-uuid-LVM-iKyJOsH78rtvgbLs6UfPuiIqW1omUJULgefd1komE3xEnVuXzInDXRsz03pmssLm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492435 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492439 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492449 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492453 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492457 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492461 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492468 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492472 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492482 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492487 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492522 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492536 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d', 'scsi-SQEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d-part1', 'scsi-SQEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d-part14', 'scsi-SQEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d-part15', 'scsi-SQEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d-part16', 'scsi-SQEMU_QEMU_HARDDISK_02781145-d1c2-4e4e-a6de-55bca7cba69d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-08-29 15:00:15.492550 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492555 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492560 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--dc8c4f7f--2eb1--5ff6--8642--584f5da1f281-osd--block--dc8c4f7f--2eb1--5ff6--8642--584f5da1f281'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YYXEW6-66WM-VdEs-St5p-kHqo-tmbV-ohONUP', 'scsi-0QEMU_QEMU_HARDDISK_e2b81981-b087-421f-a1f1-ab20210f7cdd', 'scsi-SQEMU_QEMU_HARDDISK_e2b81981-b087-421f-a1f1-ab20210f7cdd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492566 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492573 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492577 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--74173feb--4ed6--53ea--9fd2--1d4ff9ba2fde-osd--block--74173feb--4ed6--53ea--9fd2--1d4ff9ba2fde'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HIcRjp-yJfB-W5bx-nvaq-qvXo-41yG-dor8I7', 'scsi-0QEMU_QEMU_HARDDISK_4ecbb96d-8085-4234-aac0-aef459b35ca9', 'scsi-SQEMU_QEMU_HARDDISK_4ecbb96d-8085-4234-aac0-aef459b35ca9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492590 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d88be871-1de9-4c4e-96cc-2f99c6f9bcd6', 'scsi-SQEMU_QEMU_HARDDISK_d88be871-1de9-4c4e-96cc-2f99c6f9bcd6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492594 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492599 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492604 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492612 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-06-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492623 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part1', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part14', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part15', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part16', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
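The long run of `skipping:` records in this task traces back to two `when:` guards on the ceph device-discovery loop, visible in the `false_condition` fields: control-plane hosts (testbed-node-0/1/2) fail `inventory_hostname in groups.get(osd_group_name, [])`, while the OSD hosts (testbed-node-3/4/5) pass the group check but fail `osd_auto_discovery | default(False) | bool`. A minimal Python sketch of how Ansible picks the first false guard per loop item (the helper name and the `"osds"` default group name are illustrative assumptions, not part of the playbook):

```python
# Hypothetical sketch: reproduce which guard Ansible reports as the
# "false_condition" for each host in the skip records above.

def false_condition(hostvars, groups):
    """Return the first false guard string (as Ansible would report), or None if the task runs."""
    # Guard 1: the host must be in the OSD group at all
    # ("osds" is assumed here as the default for osd_group_name).
    osd_group = hostvars.get("osd_group_name", "osds")
    if hostvars["inventory_hostname"] not in groups.get(osd_group, []):
        return "inventory_hostname in groups.get(osd_group_name, [])"
    # Guard 2: automatic OSD device discovery must be enabled.
    if not bool(hostvars.get("osd_auto_discovery", False)):
        return "osd_auto_discovery | default(False) | bool"
    return None


groups = {"osds": ["testbed-node-3", "testbed-node-4", "testbed-node-5"]}
# Control-plane host: fails the group-membership guard.
print(false_condition({"inventory_hostname": "testbed-node-0"}, groups))
# OSD host with auto-discovery left at its default: fails the second guard.
print(false_condition({"inventory_hostname": "testbed-node-3"}, groups))
```

Because `osd_auto_discovery` defaults to false in this testbed, every per-device loop item on the OSD nodes is skipped and the disks are instead consumed via the explicitly configured device list (the `ceph--…-osd--block--…` LVM holders on sdb/sdc show the OSDs were already provisioned).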
2025-08-29 15:00:15.492628 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492635 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8955e74f--f88a--5c8e--a869--5f490c143acc-osd--block--8955e74f--f88a--5c8e--a869--5f490c143acc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tAXKcI-A8hy-IioI-LWvB-km1w-baTb-bsyZta', 'scsi-0QEMU_QEMU_HARDDISK_cfe2d7b1-468c-4925-a2ef-e57e3e9904a6', 'scsi-SQEMU_QEMU_HARDDISK_cfe2d7b1-468c-4925-a2ef-e57e3e9904a6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492643 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492650 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--76bc2ac4--c5cd--591d--a103--fddbd09e4373-osd--block--76bc2ac4--c5cd--591d--a103--fddbd09e4373'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xBwyMF-FYxc-04qI-0fEU-AYWj-zcBC-304X7g', 'scsi-0QEMU_QEMU_HARDDISK_2af32ac1-7951-4112-a429-d0343cb67ad9', 'scsi-SQEMU_QEMU_HARDDISK_2af32ac1-7951-4112-a429-d0343cb67ad9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492655 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492660 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492664 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4cd25e5-e70d-493f-a3e8-ae6027cdfc98', 'scsi-SQEMU_QEMU_HARDDISK_e4cd25e5-e70d-493f-a3e8-ae6027cdfc98'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492671 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492679 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2025-08-29 15:00:15.492688 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a', 'scsi-SQEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 
'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492693 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492700 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-06-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492964 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | 
bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--73f6d854--e6b6--54de--b399--c089d2858352-osd--block--73f6d854--e6b6--54de--b399--c089d2858352'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V9ovz7-GIq2-tF1d-Owus-2UHr-v8sj-1Fxx35', 'scsi-0QEMU_QEMU_HARDDISK_b5eca971-d360-4d10-a7ea-637f4b5fbeee', 'scsi-SQEMU_QEMU_HARDDISK_b5eca971-d360-4d10-a7ea-637f4b5fbeee'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492980 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492987 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.492993 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b0db6b07--6be9--5d1b--9597--ea455233b3a1-osd--block--b0db6b07--6be9--5d1b--9597--ea455233b3a1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tds9kX-NVeh-oQdQ-yJw0-iJwc-Se6q-lU97tY', 'scsi-0QEMU_QEMU_HARDDISK_8530debd-d017-4f26-8837-9c6ea90d3888', 'scsi-SQEMU_QEMU_HARDDISK_8530debd-d017-4f26-8837-9c6ea90d3888'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493005 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493019 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_994268b6-638b-49ae-9337-a5a883f2caf6', 'scsi-SQEMU_QEMU_HARDDISK_994268b6-638b-49ae-9337-a5a883f2caf6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493039 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493209 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.493215 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 
KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493224 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820', 'scsi-SQEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820-part1', 'scsi-SQEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820-part14', 'scsi-SQEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820-part15', 'scsi-SQEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820-part16', 'scsi-SQEMU_QEMU_HARDDISK_ef226c67-7d86-4d5e-a378-f55055673820-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493247 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-06-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493252 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493256 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493260 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493264 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493274 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493278 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493292 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493297 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname 
in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493301 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.493308 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae', 'scsi-SQEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae-part1', 'scsi-SQEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae-part14', 'scsi-SQEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae-part15', 
'scsi-SQEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae-part16', 'scsi-SQEMU_QEMU_HARDDISK_f87c7dfb-0134-4ec7-a213-44c56380f9ae-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493318 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.493322 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493336 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493341 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.493345 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.493349 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493353 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493393 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493400 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493404 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493420 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493425 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493433 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf', 'scsi-SQEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf-part1', 'scsi-SQEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf-part14', 'scsi-SQEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf-part15', 'scsi-SQEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf-part16', 'scsi-SQEMU_QEMU_HARDDISK_5911b2dd-dfb9-4225-b72b-248290496bdf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-08-29 15:00:15.493441 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-06-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:00:15.493445 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.493449 | orchestrator | 2025-08-29 15:00:15.493453 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-08-29 15:00:15.493457 | orchestrator | Friday 29 August 2025 14:47:44 +0000 (0:00:01.539) 0:00:36.602 ********* 2025-08-29 15:00:15.493470 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.493475 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.493479 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.493482 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.493486 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.493490 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.493494 | orchestrator | 2025-08-29 15:00:15.493497 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-08-29 15:00:15.493501 | orchestrator | Friday 29 August 2025 14:47:46 +0000 (0:00:01.267) 0:00:37.870 ********* 2025-08-29 15:00:15.493505 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.493509 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.493512 | 
orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.493516 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.493520 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.493523 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.493527 | orchestrator | 2025-08-29 15:00:15.493531 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 15:00:15.493535 | orchestrator | Friday 29 August 2025 14:47:47 +0000 (0:00:01.577) 0:00:39.447 ********* 2025-08-29 15:00:15.493539 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.493542 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.493550 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.493554 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.493558 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.493561 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.493565 | orchestrator | 2025-08-29 15:00:15.493569 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 15:00:15.493573 | orchestrator | Friday 29 August 2025 14:47:48 +0000 (0:00:01.000) 0:00:40.448 ********* 2025-08-29 15:00:15.493576 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.493580 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.493584 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.493587 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.493591 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.493595 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.493598 | orchestrator | 2025-08-29 15:00:15.493602 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 15:00:15.493606 | orchestrator | Friday 29 August 2025 14:47:49 +0000 (0:00:00.562) 0:00:41.010 ********* 2025-08-29 15:00:15.493610 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 15:00:15.493613 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.493617 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.493621 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.493624 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.493628 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.493632 | orchestrator | 2025-08-29 15:00:15.493636 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 15:00:15.493640 | orchestrator | Friday 29 August 2025 14:47:50 +0000 (0:00:01.187) 0:00:42.197 ********* 2025-08-29 15:00:15.493643 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.493647 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.493651 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.493654 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.493658 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.493662 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.493665 | orchestrator | 2025-08-29 15:00:15.493704 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-08-29 15:00:15.493709 | orchestrator | Friday 29 August 2025 14:47:51 +0000 (0:00:01.445) 0:00:43.642 ********* 2025-08-29 15:00:15.493713 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-08-29 15:00:15.493717 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-08-29 15:00:15.493721 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-08-29 15:00:15.493725 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-08-29 15:00:15.493729 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-08-29 15:00:15.493733 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-08-29 15:00:15.493736 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 
2025-08-29 15:00:15.493740 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-08-29 15:00:15.493747 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-08-29 15:00:15.493942 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-08-29 15:00:15.493948 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-08-29 15:00:15.493952 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-08-29 15:00:15.493956 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-08-29 15:00:15.493960 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-08-29 15:00:15.493964 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-08-29 15:00:15.493969 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-08-29 15:00:15.493975 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-08-29 15:00:15.493981 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-08-29 15:00:15.493987 | orchestrator | 2025-08-29 15:00:15.493993 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-08-29 15:00:15.494005 | orchestrator | Friday 29 August 2025 14:47:55 +0000 (0:00:04.009) 0:00:47.652 ********* 2025-08-29 15:00:15.494012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 15:00:15.494058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 15:00:15.494063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 15:00:15.494067 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.494070 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-08-29 15:00:15.494074 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-08-29 15:00:15.494078 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-08-29 15:00:15.494082 | orchestrator | skipping: [testbed-node-0] 
=> (item=testbed-node-0)  2025-08-29 15:00:15.494086 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 15:00:15.494090 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-08-29 15:00:15.494111 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 15:00:15.494115 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-08-29 15:00:15.494119 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-08-29 15:00:15.494123 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.494126 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-08-29 15:00:15.494130 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-08-29 15:00:15.494134 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-08-29 15:00:15.494138 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.494142 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.494146 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.494150 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-08-29 15:00:15.494153 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-08-29 15:00:15.494157 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-08-29 15:00:15.494161 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.494164 | orchestrator | 2025-08-29 15:00:15.494168 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-08-29 15:00:15.494172 | orchestrator | Friday 29 August 2025 14:47:57 +0000 (0:00:01.174) 0:00:48.826 ********* 2025-08-29 15:00:15.494176 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.494180 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.494183 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.494188 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.494192 | orchestrator | 2025-08-29 15:00:15.494196 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-08-29 15:00:15.494200 | orchestrator | Friday 29 August 2025 14:47:58 +0000 (0:00:01.647) 0:00:50.474 ********* 2025-08-29 15:00:15.494204 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.494208 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.494211 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.494215 | orchestrator | 2025-08-29 15:00:15.494219 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-08-29 15:00:15.494222 | orchestrator | Friday 29 August 2025 14:47:59 +0000 (0:00:00.448) 0:00:50.923 ********* 2025-08-29 15:00:15.494226 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.494230 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.494233 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.494237 | orchestrator | 2025-08-29 15:00:15.494241 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-08-29 15:00:15.494245 | orchestrator | Friday 29 August 2025 14:47:59 +0000 (0:00:00.489) 0:00:51.413 ********* 2025-08-29 15:00:15.494249 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.494257 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.494260 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.494264 | orchestrator | 2025-08-29 15:00:15.494268 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-08-29 15:00:15.494271 | orchestrator | Friday 29 August 2025 14:48:00 +0000 (0:00:00.780) 0:00:52.194 ********* 2025-08-29 15:00:15.494275 | orchestrator | 
ok: [testbed-node-3] 2025-08-29 15:00:15.494279 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.494283 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.494287 | orchestrator | 2025-08-29 15:00:15.494290 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-08-29 15:00:15.494294 | orchestrator | Friday 29 August 2025 14:48:01 +0000 (0:00:00.755) 0:00:52.949 ********* 2025-08-29 15:00:15.494298 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:15.494301 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:15.494305 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:15.494309 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.494313 | orchestrator | 2025-08-29 15:00:15.494316 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-08-29 15:00:15.494324 | orchestrator | Friday 29 August 2025 14:48:01 +0000 (0:00:00.517) 0:00:53.467 ********* 2025-08-29 15:00:15.494327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:15.494331 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:15.494335 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:15.494339 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.494342 | orchestrator | 2025-08-29 15:00:15.494346 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-08-29 15:00:15.494350 | orchestrator | Friday 29 August 2025 14:48:02 +0000 (0:00:00.485) 0:00:53.953 ********* 2025-08-29 15:00:15.494353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:15.494357 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:15.494361 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2025-08-29 15:00:15.494364 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.494368 | orchestrator | 2025-08-29 15:00:15.494372 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-08-29 15:00:15.494376 | orchestrator | Friday 29 August 2025 14:48:02 +0000 (0:00:00.540) 0:00:54.494 ********* 2025-08-29 15:00:15.494379 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.494383 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.494387 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.494390 | orchestrator | 2025-08-29 15:00:15.494394 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-08-29 15:00:15.494398 | orchestrator | Friday 29 August 2025 14:48:03 +0000 (0:00:00.509) 0:00:55.003 ********* 2025-08-29 15:00:15.494402 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-08-29 15:00:15.494405 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 15:00:15.494409 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-08-29 15:00:15.494413 | orchestrator | 2025-08-29 15:00:15.494428 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-08-29 15:00:15.494432 | orchestrator | Friday 29 August 2025 14:48:04 +0000 (0:00:01.085) 0:00:56.089 ********* 2025-08-29 15:00:15.494436 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:00:15.494440 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:00:15.494444 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:00:15.494448 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-08-29 15:00:15.494451 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 15:00:15.494458 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 15:00:15.494462 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 15:00:15.494466 | orchestrator | 2025-08-29 15:00:15.494470 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-08-29 15:00:15.494473 | orchestrator | Friday 29 August 2025 14:48:05 +0000 (0:00:00.925) 0:00:57.015 ********* 2025-08-29 15:00:15.494477 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:00:15.494481 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:00:15.494484 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:00:15.494490 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-08-29 15:00:15.494496 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 15:00:15.494502 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 15:00:15.494508 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 15:00:15.494515 | orchestrator | 2025-08-29 15:00:15.494521 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 15:00:15.494528 | orchestrator | Friday 29 August 2025 14:48:07 +0000 (0:00:01.857) 0:00:58.873 ********* 2025-08-29 15:00:15.494536 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:15.494542 | orchestrator | 2025-08-29 15:00:15.494546 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2025-08-29 15:00:15.494550 | orchestrator | Friday 29 August 2025 14:48:08 +0000 (0:00:01.362) 0:01:00.235 ********* 2025-08-29 15:00:15.494553 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:15.494557 | orchestrator | 2025-08-29 15:00:15.494561 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 15:00:15.494565 | orchestrator | Friday 29 August 2025 14:48:09 +0000 (0:00:01.317) 0:01:01.552 ********* 2025-08-29 15:00:15.494569 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.494573 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.494578 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.494584 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.494590 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.494596 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.494602 | orchestrator | 2025-08-29 15:00:15.494609 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 15:00:15.494615 | orchestrator | Friday 29 August 2025 14:48:11 +0000 (0:00:01.564) 0:01:03.117 ********* 2025-08-29 15:00:15.494622 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.494628 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.494638 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.494645 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.494651 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.494658 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.494664 | orchestrator | 2025-08-29 15:00:15.494671 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 15:00:15.494677 | orchestrator | Friday 29 August 2025 14:48:12 +0000 
(0:00:01.119) 0:01:04.237 ********* 2025-08-29 15:00:15.494684 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.494691 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.494697 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.494704 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.494710 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.494716 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.494726 | orchestrator | 2025-08-29 15:00:15.494732 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:00:15.494738 | orchestrator | Friday 29 August 2025 14:48:13 +0000 (0:00:01.054) 0:01:05.292 ********* 2025-08-29 15:00:15.494744 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.494749 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.494755 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.494761 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.494766 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.494772 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.494778 | orchestrator | 2025-08-29 15:00:15.494784 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:00:15.494790 | orchestrator | Friday 29 August 2025 14:48:14 +0000 (0:00:01.407) 0:01:06.699 ********* 2025-08-29 15:00:15.494796 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.494802 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.494809 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.494815 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.494821 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.494827 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.494833 | orchestrator | 2025-08-29 15:00:15.494839 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
*************************
2025-08-29 15:00:15.494880 | orchestrator | Friday 29 August 2025 14:48:16 +0000 (0:00:01.985) 0:01:08.684 *********
2025-08-29 15:00:15.494885 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.494889 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.494893 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.494896 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.494900 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.494904 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.494908 | orchestrator |
2025-08-29 15:00:15.494911 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-08-29 15:00:15.494915 | orchestrator | Friday 29 August 2025 14:48:18 +0000 (0:00:01.450) 0:01:10.135 *********
2025-08-29 15:00:15.494919 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.494923 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.494926 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.494930 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.494934 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.494937 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.494941 | orchestrator |
2025-08-29 15:00:15.494945 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-08-29 15:00:15.494949 | orchestrator | Friday 29 August 2025 14:48:19 +0000 (0:00:00.636) 0:01:10.772 *********
2025-08-29 15:00:15.494952 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:00:15.494956 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:00:15.494960 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:00:15.494964 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.494967 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.494971 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.494975 | orchestrator |
2025-08-29 15:00:15.494979 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-08-29 15:00:15.494982 | orchestrator | Friday 29 August 2025 14:48:21 +0000 (0:00:02.037) 0:01:12.809 *********
2025-08-29 15:00:15.494986 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:00:15.494990 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:00:15.494994 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:00:15.494998 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.495001 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.495005 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.495009 | orchestrator |
2025-08-29 15:00:15.495013 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-08-29 15:00:15.495016 | orchestrator | Friday 29 August 2025 14:48:22 +0000 (0:00:01.593) 0:01:14.403 *********
2025-08-29 15:00:15.495024 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.495028 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.495032 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.495036 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.495039 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.495043 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.495047 | orchestrator |
2025-08-29 15:00:15.495051 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-08-29 15:00:15.495054 | orchestrator | Friday 29 August 2025 14:48:23 +0000 (0:00:01.301) 0:01:15.704 *********
2025-08-29 15:00:15.495058 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.495062 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.495066 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.495069 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.495073 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.495077 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.495080 | orchestrator |
2025-08-29 15:00:15.495084 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-08-29 15:00:15.495088 | orchestrator | Friday 29 August 2025 14:48:24 +0000 (0:00:00.685) 0:01:16.389 *********
2025-08-29 15:00:15.495092 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:00:15.495095 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:00:15.495099 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:00:15.495103 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.495106 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.495110 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.495114 | orchestrator |
2025-08-29 15:00:15.495118 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-08-29 15:00:15.495121 | orchestrator | Friday 29 August 2025 14:48:25 +0000 (0:00:00.970) 0:01:17.360 *********
2025-08-29 15:00:15.495125 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:00:15.495129 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:00:15.495136 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:00:15.495139 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.495143 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.495147 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.495150 | orchestrator |
2025-08-29 15:00:15.495154 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-08-29 15:00:15.495158 | orchestrator | Friday 29 August 2025 14:48:26 +0000 (0:00:01.011) 0:01:18.372 *********
2025-08-29 15:00:15.495162 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:00:15.495165 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:00:15.495169 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:00:15.495173 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.495176 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.495180 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.495184 | orchestrator |
2025-08-29 15:00:15.495188 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-08-29 15:00:15.495191 | orchestrator | Friday 29 August 2025 14:48:27 +0000 (0:00:01.085) 0:01:19.457 *********
2025-08-29 15:00:15.495195 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.495199 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.495202 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.495206 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.495210 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.495213 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.495217 | orchestrator |
2025-08-29 15:00:15.495221 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-08-29 15:00:15.495225 | orchestrator | Friday 29 August 2025 14:48:28 +0000 (0:00:01.103) 0:01:20.561 *********
2025-08-29 15:00:15.495228 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.495232 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.495236 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.495242 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.495255 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.495262 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.495268 | orchestrator |
2025-08-29 15:00:15.495290 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-08-29 15:00:15.495296 | orchestrator | Friday 29 August 2025 14:48:29 +0000 (0:00:00.859) 0:01:21.421 *********
2025-08-29 15:00:15.495303 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.495309 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.495313 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.495317 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.495321 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.495324 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.495328 | orchestrator |
2025-08-29 15:00:15.495332 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-08-29 15:00:15.495335 | orchestrator | Friday 29 August 2025 14:48:30 +0000 (0:00:00.655) 0:01:22.077 *********
2025-08-29 15:00:15.495339 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:00:15.495343 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:00:15.495346 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:00:15.495350 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.495354 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.495357 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.495361 | orchestrator |
2025-08-29 15:00:15.495365 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-08-29 15:00:15.495369 | orchestrator | Friday 29 August 2025 14:48:31 +0000 (0:00:00.976) 0:01:23.054 *********
2025-08-29 15:00:15.495372 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:00:15.495376 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:00:15.495380 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:00:15.495383 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.495387 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.495390 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.495394 | orchestrator |
2025-08-29 15:00:15.495398 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-08-29 15:00:15.495402 | orchestrator | Friday 29 August 2025 14:48:32 +0000 (0:00:01.480) 0:01:24.534 *********
2025-08-29 15:00:15.495405 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:00:15.495409 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:00:15.495413 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:00:15.495416 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:15.495420 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:15.495424 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:15.495427 | orchestrator |
2025-08-29 15:00:15.495431 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-08-29 15:00:15.495435 | orchestrator | Friday 29 August 2025 14:48:34 +0000 (0:00:01.510) 0:01:26.045 *********
2025-08-29 15:00:15.495439 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:00:15.495442 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:00:15.495446 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:00:15.495449 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:15.495453 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:15.495457 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:15.495461 | orchestrator |
2025-08-29 15:00:15.495464 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-08-29 15:00:15.495468 | orchestrator | Friday 29 August 2025 14:48:36 +0000 (0:00:02.249) 0:01:28.294 *********
2025-08-29 15:00:15.495472 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:00:15.495476 | orchestrator |
2025-08-29 15:00:15.495480 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-08-29 15:00:15.495484 | orchestrator | Friday 29 August 2025 14:48:37 +0000 (0:00:01.118) 0:01:29.412 *********
2025-08-29 15:00:15.495487 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.495491 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.495499 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.495503 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.495507 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.495510 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.495514 | orchestrator |
2025-08-29 15:00:15.495518 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-08-29 15:00:15.495522 | orchestrator | Friday 29 August 2025 14:48:38 +0000 (0:00:00.509) 0:01:29.922 *********
2025-08-29 15:00:15.495525 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.495532 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.495536 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.495539 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.495543 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.495547 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.495551 | orchestrator |
2025-08-29 15:00:15.495554 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-08-29 15:00:15.495558 | orchestrator | Friday 29 August 2025 14:48:38 +0000 (0:00:00.686) 0:01:30.608 *********
2025-08-29 15:00:15.495562 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-08-29 15:00:15.495566 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-08-29 15:00:15.495569 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-08-29 15:00:15.495573 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-08-29 15:00:15.495577 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-08-29 15:00:15.495580 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-08-29 15:00:15.495584 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-08-29 15:00:15.495588 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-08-29 15:00:15.495592 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-08-29 15:00:15.495596 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-08-29 15:00:15.495599 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-08-29 15:00:15.495615 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-08-29 15:00:15.495620 | orchestrator |
2025-08-29 15:00:15.495624 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-08-29 15:00:15.495627 | orchestrator | Friday 29 August 2025 14:48:40 +0000 (0:00:01.269) 0:01:31.877 *********
2025-08-29 15:00:15.495631 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:00:15.495635 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:00:15.495639 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:00:15.495642 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:15.495646 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:15.495650 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:15.495653 | orchestrator |
2025-08-29 15:00:15.495657 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-08-29 15:00:15.495661 | orchestrator | Friday 29 August 2025 14:48:41 +0000 (0:00:01.052) 0:01:32.930 *********
2025-08-29 15:00:15.495665 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.495668 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.495672 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.495676 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.495679 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.495683 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.495687 | orchestrator |
2025-08-29 15:00:15.495690 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-08-29 15:00:15.495694 | orchestrator | Friday 29 August 2025 14:48:41 +0000 (0:00:00.546) 0:01:33.476 *********
2025-08-29 15:00:15.495704 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.495708 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.495712 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.495715 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.495719 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.495723 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.495726 | orchestrator |
2025-08-29 15:00:15.495730 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-08-29 15:00:15.495734 | orchestrator | Friday 29 August 2025 14:48:42 +0000 (0:00:00.721) 0:01:34.197 *********
2025-08-29 15:00:15.495738 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.495741 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.495745 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.495748 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.495752 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.495756 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.495759 | orchestrator |
2025-08-29 15:00:15.495763 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-08-29 15:00:15.495767 | orchestrator | Friday 29 August 2025 14:48:43 +0000 (0:00:00.574) 0:01:34.772 *********
2025-08-29 15:00:15.495771 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:00:15.495775 | orchestrator |
2025-08-29 15:00:15.495779 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-08-29 15:00:15.495783 | orchestrator | Friday 29 August 2025 14:48:44 +0000 (0:00:01.283) 0:01:36.056 *********
2025-08-29 15:00:15.495786 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:00:15.495790 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:00:15.495794 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.495797 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:00:15.495802 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.495808 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.495814 | orchestrator |
2025-08-29 15:00:15.495820 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-08-29 15:00:15.495826 | orchestrator | Friday 29 August 2025 14:51:05 +0000 (0:02:21.518) 0:03:57.575 *********
2025-08-29 15:00:15.495832 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-08-29 15:00:15.495839 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-08-29 15:00:15.495859 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-08-29 15:00:15.495870 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.495874 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-08-29 15:00:15.495878 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-08-29 15:00:15.495882 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-08-29 15:00:15.495885 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.495889 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-08-29 15:00:15.495893 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-08-29 15:00:15.495897 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-08-29 15:00:15.495900 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.495904 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-08-29 15:00:15.495908 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-08-29 15:00:15.495912 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-08-29 15:00:15.495915 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.495919 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-08-29 15:00:15.495928 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-08-29 15:00:15.495931 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-08-29 15:00:15.495935 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.495939 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-08-29 15:00:15.495956 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-08-29 15:00:15.495961 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-08-29 15:00:15.495964 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.495968 | orchestrator |
2025-08-29 15:00:15.495972 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-08-29 15:00:15.495976 | orchestrator | Friday 29 August 2025 14:51:06 +0000 (0:00:01.104) 0:03:58.679 *********
2025-08-29 15:00:15.495979 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.495983 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.495987 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.495990 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.495994 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.495998 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.496002 | orchestrator |
2025-08-29 15:00:15.496005 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-08-29 15:00:15.496009 | orchestrator | Friday 29 August 2025 14:51:08 +0000 (0:00:00.203) 0:03:59.818 *********
2025-08-29 15:00:15.496013 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.496016 | orchestrator |
2025-08-29 15:00:15.496020 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-08-29 15:00:15.496024 | orchestrator | Friday 29 August 2025 14:51:08 +0000 (0:00:00.203) 0:04:00.022 *********
2025-08-29 15:00:15.496028 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.496031 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.496035 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.496039 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.496042 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.496046 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.496050 | orchestrator |
2025-08-29 15:00:15.496054 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-08-29 15:00:15.496057 | orchestrator | Friday 29 August 2025 14:51:09 +0000 (0:00:00.798) 0:04:00.820 *********
2025-08-29 15:00:15.496061 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.496065 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.496068 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.496072 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.496076 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.496080 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.496083 | orchestrator |
2025-08-29 15:00:15.496087 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-08-29 15:00:15.496091 | orchestrator | Friday 29 August 2025 14:51:09 +0000 (0:00:00.924) 0:04:01.745 *********
2025-08-29 15:00:15.496094 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.496098 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.496102 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.496105 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.496109 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.496113 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.496116 | orchestrator |
2025-08-29 15:00:15.496120 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-08-29 15:00:15.496124 | orchestrator | Friday 29 August 2025 14:51:10 +0000 (0:00:00.748) 0:04:02.494 *********
2025-08-29 15:00:15.496128 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:00:15.496131 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:00:15.496135 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.496142 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:00:15.496146 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.496150 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.496153 | orchestrator |
2025-08-29 15:00:15.496157 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-08-29 15:00:15.496161 | orchestrator | Friday 29 August 2025 14:51:13 +0000 (0:00:02.828) 0:04:05.322 *********
2025-08-29 15:00:15.496165 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:00:15.496169 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:00:15.496172 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:00:15.496176 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.496180 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.496183 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.496187 | orchestrator |
2025-08-29 15:00:15.496191 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-08-29 15:00:15.496194 | orchestrator | Friday 29 August 2025 14:51:14 +0000 (0:00:00.760) 0:04:06.082 *********
2025-08-29 15:00:15.496212 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:00:15.496217 | orchestrator |
2025-08-29 15:00:15.496221 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-08-29 15:00:15.496225 | orchestrator | Friday 29 August 2025 14:51:15 +0000 (0:00:01.494) 0:04:07.576 *********
2025-08-29 15:00:15.496229 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.496232 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.496236 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.496240 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.496244 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.496248 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.496251 | orchestrator |
2025-08-29 15:00:15.496255 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-08-29 15:00:15.496259 | orchestrator | Friday 29 August 2025 14:51:16 +0000 (0:00:00.964) 0:04:08.541 *********
2025-08-29 15:00:15.496263 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.496266 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.496270 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.496274 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.496278 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.496281 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.496285 | orchestrator |
2025-08-29 15:00:15.496289 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-08-29 15:00:15.496293 | orchestrator | Friday 29 August 2025 14:51:17 +0000 (0:00:00.882) 0:04:09.424 *********
2025-08-29 15:00:15.496296 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.496300 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.496304 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.496307 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.496311 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.496326 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.496330 | orchestrator |
2025-08-29 15:00:15.496334 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-08-29 15:00:15.496338 | orchestrator | Friday 29 August 2025 14:51:18 +0000 (0:00:00.937) 0:04:10.362 *********
2025-08-29 15:00:15.496341 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.496345 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.496349 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.496352 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.496356 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.496360 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.496363 | orchestrator |
2025-08-29 15:00:15.496367 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-08-29 15:00:15.496371 | orchestrator | Friday 29 August 2025 14:51:19 +0000 (0:00:01.379) 0:04:11.742 *********
2025-08-29 15:00:15.496378 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.496381 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.496385 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.496389 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.496393 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.496396 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.496400 | orchestrator |
2025-08-29 15:00:15.496404 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-08-29 15:00:15.496408 | orchestrator | Friday 29 August 2025 14:51:20 +0000 (0:00:00.832) 0:04:12.575 *********
2025-08-29 15:00:15.496411 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.496415 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.496419 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.496422 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.496426 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.496430 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.496434 | orchestrator |
2025-08-29 15:00:15.496437 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-08-29 15:00:15.496441 | orchestrator | Friday 29 August 2025 14:51:22 +0000 (0:00:01.206) 0:04:13.781 *********
2025-08-29 15:00:15.496445 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.496449 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.496452 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.496456 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.496459 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.496463 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.496467 | orchestrator |
2025-08-29 15:00:15.496470 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-08-29 15:00:15.496474 | orchestrator | Friday 29 August 2025 14:51:22 +0000 (0:00:00.778) 0:04:14.560 *********
2025-08-29 15:00:15.496478 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.496482 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.496485 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.496489 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.496493 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.496496 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.496500 | orchestrator |
2025-08-29 15:00:15.496504 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-08-29 15:00:15.496507 | orchestrator | Friday 29 August 2025 14:51:23 +0000 (0:00:01.138) 0:04:15.698 *********
2025-08-29 15:00:15.496511 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:00:15.496515 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:00:15.496518 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:00:15.496522 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.496526 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.496530 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.496533 | orchestrator |
2025-08-29 15:00:15.496537 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-08-29 15:00:15.496541 | orchestrator | Friday 29 August 2025 14:51:25 +0000 (0:00:01.595) 0:04:17.294 *********
2025-08-29 15:00:15.496544 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:00:15.496548 | orchestrator |
2025-08-29 15:00:15.496552 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-08-29 15:00:15.496558 | orchestrator | Friday 29 August 2025 14:51:27 +0000 (0:00:01.646) 0:04:18.941 *********
2025-08-29 15:00:15.496562 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-08-29 15:00:15.496565 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-08-29 15:00:15.496569 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-08-29 15:00:15.496573 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-08-29 15:00:15.496584 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-08-29 15:00:15.496590 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-08-29 15:00:15.496597 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-08-29 15:00:15.496603 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-08-29 15:00:15.496609 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-08-29 15:00:15.496615 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-08-29 15:00:15.496621 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-08-29 15:00:15.496627 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-08-29 15:00:15.496633 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-08-29 15:00:15.496639 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-08-29 15:00:15.496645 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-08-29 15:00:15.496651 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-08-29 15:00:15.496655 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-08-29 15:00:15.496658 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-08-29 15:00:15.496662 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-08-29 15:00:15.496666 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-08-29 15:00:15.496684 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-08-29 15:00:15.496688 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-08-29 15:00:15.496692 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-08-29 15:00:15.496695 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-08-29 15:00:15.496699 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-08-29 15:00:15.496703 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-08-29 15:00:15.496708 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-08-29 15:00:15.496715 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-08-29 15:00:15.496721 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-08-29 15:00:15.496727 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-08-29 15:00:15.496734 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-08-29 15:00:15.496740 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-08-29 15:00:15.496747 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-08-29 15:00:15.496753 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-08-29 15:00:15.496759 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-08-29 15:00:15.496766 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-08-29 15:00:15.496770 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-08-29 15:00:15.496774 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-08-29 15:00:15.496777 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-08-29 15:00:15.496781 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-08-29 15:00:15.496785 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-08-29 15:00:15.496788 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-08-29 15:00:15.496792 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-08-29 15:00:15.496796 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-08-29 15:00:15.496799 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-08-29 15:00:15.496803 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-08-29 15:00:15.496807 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 15:00:15.496810 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-08-29 15:00:15.496818 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-08-29 15:00:15.496822 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 15:00:15.496826 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 15:00:15.496829 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 15:00:15.496833 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 15:00:15.496837 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 15:00:15.496841 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 15:00:15.496879 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 15:00:15.496884 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 15:00:15.496887 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 15:00:15.496891 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 15:00:15.496895 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 15:00:15.496902 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 15:00:15.496905 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 15:00:15.496909 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 15:00:15.496913 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 15:00:15.496917 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 15:00:15.496920 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 15:00:15.496924 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 15:00:15.496928 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 15:00:15.496931 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 15:00:15.496935 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 15:00:15.496939 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 15:00:15.496942 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 15:00:15.496946 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 15:00:15.496950 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 15:00:15.496953 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 15:00:15.496957 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 15:00:15.496961 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 15:00:15.496964 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 15:00:15.496983 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 15:00:15.496987 | orchestrator | changed: [testbed-node-4] =>
(item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 15:00:15.496991 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 15:00:15.496995 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 15:00:15.496998 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-08-29 15:00:15.497002 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 15:00:15.497006 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-08-29 15:00:15.497010 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-08-29 15:00:15.497014 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 15:00:15.497021 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-08-29 15:00:15.497025 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-08-29 15:00:15.497029 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-08-29 15:00:15.497032 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-08-29 15:00:15.497036 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-08-29 15:00:15.497040 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-08-29 15:00:15.497043 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-08-29 15:00:15.497047 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-08-29 15:00:15.497051 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-08-29 15:00:15.497055 | orchestrator | 2025-08-29 15:00:15.497059 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-08-29 15:00:15.497062 | orchestrator | Friday 29 August 2025 14:51:33 +0000 (0:00:06.781) 0:04:25.723 ********* 2025-08-29 15:00:15.497066 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
15:00:15.497070 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497074 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497077 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.497081 | orchestrator | 2025-08-29 15:00:15.497085 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-08-29 15:00:15.497089 | orchestrator | Friday 29 August 2025 14:51:35 +0000 (0:00:01.464) 0:04:27.188 ********* 2025-08-29 15:00:15.497095 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:15.497102 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:15.497108 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:15.497114 | orchestrator | 2025-08-29 15:00:15.497121 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-08-29 15:00:15.497127 | orchestrator | Friday 29 August 2025 14:51:36 +0000 (0:00:00.728) 0:04:27.916 ********* 2025-08-29 15:00:15.497133 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:15.497139 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:15.497149 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:15.497154 | orchestrator | 2025-08-29 15:00:15.497157 | orchestrator | 
TASK [ceph-config : Reset num_osds] ******************************************** 2025-08-29 15:00:15.497161 | orchestrator | Friday 29 August 2025 14:51:37 +0000 (0:00:01.541) 0:04:29.458 ********* 2025-08-29 15:00:15.497165 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.497169 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.497173 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.497177 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497180 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497184 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497188 | orchestrator | 2025-08-29 15:00:15.497191 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-08-29 15:00:15.497195 | orchestrator | Friday 29 August 2025 14:51:38 +0000 (0:00:00.722) 0:04:30.180 ********* 2025-08-29 15:00:15.497199 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.497203 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.497206 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.497210 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497214 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497224 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497228 | orchestrator | 2025-08-29 15:00:15.497231 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-08-29 15:00:15.497235 | orchestrator | Friday 29 August 2025 14:51:39 +0000 (0:00:01.226) 0:04:31.407 ********* 2025-08-29 15:00:15.497239 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.497243 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.497246 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.497250 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497254 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497257 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 15:00:15.497261 | orchestrator | 2025-08-29 15:00:15.497265 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-08-29 15:00:15.497268 | orchestrator | Friday 29 August 2025 14:51:40 +0000 (0:00:00.769) 0:04:32.176 ********* 2025-08-29 15:00:15.497286 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.497291 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.497295 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.497298 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497302 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497306 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497309 | orchestrator | 2025-08-29 15:00:15.497313 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-08-29 15:00:15.497317 | orchestrator | Friday 29 August 2025 14:51:41 +0000 (0:00:00.670) 0:04:32.847 ********* 2025-08-29 15:00:15.497321 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.497324 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.497328 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.497332 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497335 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497339 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497343 | orchestrator | 2025-08-29 15:00:15.497346 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-08-29 15:00:15.497350 | orchestrator | Friday 29 August 2025 14:51:42 +0000 (0:00:00.993) 0:04:33.841 ********* 2025-08-29 15:00:15.497354 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.497358 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.497361 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.497365 | 
orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497369 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497373 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497376 | orchestrator | 2025-08-29 15:00:15.497380 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-08-29 15:00:15.497384 | orchestrator | Friday 29 August 2025 14:51:42 +0000 (0:00:00.696) 0:04:34.538 ********* 2025-08-29 15:00:15.497388 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.497391 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.497395 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.497399 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497402 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497406 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497410 | orchestrator | 2025-08-29 15:00:15.497414 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-08-29 15:00:15.497418 | orchestrator | Friday 29 August 2025 14:51:43 +0000 (0:00:01.043) 0:04:35.581 ********* 2025-08-29 15:00:15.497421 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.497425 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.497429 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.497433 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497436 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497440 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497447 | orchestrator | 2025-08-29 15:00:15.497451 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-08-29 15:00:15.497455 | orchestrator | Friday 29 August 2025 14:51:44 +0000 (0:00:00.728) 0:04:36.310 ********* 2025-08-29 15:00:15.497459 | 
orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497462 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497466 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497470 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.497474 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.497478 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.497481 | orchestrator | 2025-08-29 15:00:15.497485 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-08-29 15:00:15.497491 | orchestrator | Friday 29 August 2025 14:51:48 +0000 (0:00:03.778) 0:04:40.089 ********* 2025-08-29 15:00:15.497498 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.497504 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.497510 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.497516 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497523 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497529 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497535 | orchestrator | 2025-08-29 15:00:15.497541 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-08-29 15:00:15.497552 | orchestrator | Friday 29 August 2025 14:51:48 +0000 (0:00:00.633) 0:04:40.722 ********* 2025-08-29 15:00:15.497559 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.497565 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.497572 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.497577 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497581 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497584 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497588 | orchestrator | 2025-08-29 15:00:15.497592 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-08-29 15:00:15.497596 | orchestrator | Friday 29 August 
2025 14:51:50 +0000 (0:00:01.167) 0:04:41.890 ********* 2025-08-29 15:00:15.497600 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.497604 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.497607 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.497611 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497615 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497618 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497622 | orchestrator | 2025-08-29 15:00:15.497626 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-08-29 15:00:15.497630 | orchestrator | Friday 29 August 2025 14:51:50 +0000 (0:00:00.658) 0:04:42.548 ********* 2025-08-29 15:00:15.497634 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:15.497637 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:15.497641 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:15.497645 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497649 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497652 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497656 | orchestrator | 2025-08-29 15:00:15.497675 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-08-29 15:00:15.497679 | orchestrator | Friday 29 August 2025 14:51:52 +0000 (0:00:01.395) 0:04:43.944 ********* 2025-08-29 15:00:15.497685 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 
'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-08-29 15:00:15.497694 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-08-29 15:00:15.497699 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-08-29 15:00:15.497702 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-08-29 15:00:15.497706 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.497710 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-08-29 15:00:15.497714 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-08-29 15:00:15.497718 | orchestrator | skipping: 
[testbed-node-4] 2025-08-29 15:00:15.497721 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.497725 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497729 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497733 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497736 | orchestrator | 2025-08-29 15:00:15.497740 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-08-29 15:00:15.497744 | orchestrator | Friday 29 August 2025 14:51:52 +0000 (0:00:00.776) 0:04:44.721 ********* 2025-08-29 15:00:15.497747 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.497751 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.497755 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.497759 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497762 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497766 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497770 | orchestrator | 2025-08-29 15:00:15.497773 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-08-29 15:00:15.497779 | orchestrator | Friday 29 August 2025 14:51:53 +0000 (0:00:01.041) 0:04:45.762 ********* 2025-08-29 15:00:15.497783 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.497787 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.497790 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.497794 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497798 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497801 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497805 | orchestrator | 2025-08-29 15:00:15.497809 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-08-29 15:00:15.497813 | orchestrator | 
Friday 29 August 2025 14:51:54 +0000 (0:00:00.639) 0:04:46.402 ********* 2025-08-29 15:00:15.497816 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.497820 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.497824 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.497827 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497831 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497837 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497841 | orchestrator | 2025-08-29 15:00:15.497877 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-08-29 15:00:15.497881 | orchestrator | Friday 29 August 2025 14:51:55 +0000 (0:00:01.133) 0:04:47.535 ********* 2025-08-29 15:00:15.497885 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.497889 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.497892 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.497896 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497900 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497904 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497907 | orchestrator | 2025-08-29 15:00:15.497911 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-08-29 15:00:15.497915 | orchestrator | Friday 29 August 2025 14:51:56 +0000 (0:00:00.829) 0:04:48.364 ********* 2025-08-29 15:00:15.497919 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.497935 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.497940 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.497944 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497948 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497951 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.497955 | orchestrator | 2025-08-29 15:00:15.497959 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-08-29 15:00:15.497963 | orchestrator | Friday 29 August 2025 14:51:57 +0000 (0:00:01.039) 0:04:49.404 ********* 2025-08-29 15:00:15.497966 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.497971 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.497977 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.497984 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.497989 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.497996 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.498002 | orchestrator | 2025-08-29 15:00:15.498008 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-08-29 15:00:15.498033 | orchestrator | Friday 29 August 2025 14:51:58 +0000 (0:00:00.852) 0:04:50.256 ********* 2025-08-29 15:00:15.498038 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:15.498042 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:15.498046 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:15.498050 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.498054 | orchestrator | 2025-08-29 15:00:15.498057 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-08-29 15:00:15.498061 | orchestrator | Friday 29 August 2025 14:51:59 +0000 (0:00:00.921) 0:04:51.178 ********* 2025-08-29 15:00:15.498065 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:15.498069 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:15.498073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:15.498076 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.498080 | orchestrator | 2025-08-29 15:00:15.498084 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-08-29 15:00:15.498088 | orchestrator | Friday 29 August 2025 14:52:00 +0000 (0:00:00.871) 0:04:52.050 ********* 2025-08-29 15:00:15.498091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:15.498095 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:15.498099 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:15.498103 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.498106 | orchestrator | 2025-08-29 15:00:15.498110 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-08-29 15:00:15.498114 | orchestrator | Friday 29 August 2025 14:52:01 +0000 (0:00:01.097) 0:04:53.147 ********* 2025-08-29 15:00:15.498118 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.498126 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.498130 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.498134 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.498137 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.498141 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.498145 | orchestrator | 2025-08-29 15:00:15.498149 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-08-29 15:00:15.498153 | orchestrator | Friday 29 August 2025 14:52:02 +0000 (0:00:00.769) 0:04:53.917 ********* 2025-08-29 15:00:15.498156 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 15:00:15.498160 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-08-29 15:00:15.498164 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-08-29 15:00:15.498168 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-08-29 15:00:15.498171 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.498175 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2025-08-29 15:00:15.498179 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.498182 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-08-29 15:00:15.498186 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.498190 | orchestrator | 2025-08-29 15:00:15.498194 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-08-29 15:00:15.498200 | orchestrator | Friday 29 August 2025 14:52:04 +0000 (0:00:02.786) 0:04:56.703 ********* 2025-08-29 15:00:15.498204 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.498208 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.498212 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:15.498215 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.498219 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:15.498223 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:15.498227 | orchestrator | 2025-08-29 15:00:15.498230 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 15:00:15.498234 | orchestrator | Friday 29 August 2025 14:52:08 +0000 (0:00:03.599) 0:05:00.303 ********* 2025-08-29 15:00:15.498238 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.498242 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.498245 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.498249 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:15.498253 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:15.498257 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:15.498260 | orchestrator | 2025-08-29 15:00:15.498264 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-08-29 15:00:15.498268 | orchestrator | Friday 29 August 2025 14:52:10 +0000 (0:00:01.760) 0:05:02.063 ********* 2025-08-29 15:00:15.498272 | orchestrator | 
skipping: [testbed-node-3]
2025-08-29 15:00:15.498276 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.498280 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.498283 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:00:15.498287 | orchestrator |
2025-08-29 15:00:15.498291 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-08-29 15:00:15.498295 | orchestrator | Friday 29 August 2025 14:52:11 +0000 (0:00:01.269) 0:05:03.333 *********
2025-08-29 15:00:15.498298 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.498302 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.498306 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.498310 | orchestrator |
2025-08-29 15:00:15.498328 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-08-29 15:00:15.498333 | orchestrator | Friday 29 August 2025 14:52:11 +0000 (0:00:00.401) 0:05:03.734 *********
2025-08-29 15:00:15.498337 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:15.498340 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:15.498344 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:15.498348 | orchestrator |
2025-08-29 15:00:15.498351 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-08-29 15:00:15.498359 | orchestrator | Friday 29 August 2025 14:52:13 +0000 (0:00:01.387) 0:05:05.122 *********
2025-08-29 15:00:15.498363 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 15:00:15.498366 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 15:00:15.498370 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 15:00:15.498374 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.498380 | orchestrator |
2025-08-29 15:00:15.498386 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-08-29 15:00:15.498392 | orchestrator | Friday 29 August 2025 14:52:14 +0000 (0:00:01.194) 0:05:06.317 *********
2025-08-29 15:00:15.498399 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.498405 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.498412 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.498418 | orchestrator |
2025-08-29 15:00:15.498425 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-08-29 15:00:15.498430 | orchestrator | Friday 29 August 2025 14:52:15 +0000 (0:00:00.478) 0:05:06.795 *********
2025-08-29 15:00:15.498434 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.498438 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.498442 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.498445 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:00:15.498449 | orchestrator |
2025-08-29 15:00:15.498453 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-08-29 15:00:15.498457 | orchestrator | Friday 29 August 2025 14:52:16 +0000 (0:00:01.469) 0:05:08.264 *********
2025-08-29 15:00:15.498460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:00:15.498464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 15:00:15.498468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 15:00:15.498471 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.498475 | orchestrator |
2025-08-29 15:00:15.498479 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-08-29 15:00:15.498483 | orchestrator | Friday 29 August 2025 14:52:16 +0000 (0:00:00.474) 0:05:08.738 *********
2025-08-29 15:00:15.498486 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.498490 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.498494 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.498497 | orchestrator |
2025-08-29 15:00:15.498501 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-08-29 15:00:15.498505 | orchestrator | Friday 29 August 2025 14:52:17 +0000 (0:00:00.747) 0:05:09.485 *********
2025-08-29 15:00:15.498509 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.498512 | orchestrator |
2025-08-29 15:00:15.498516 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-08-29 15:00:15.498520 | orchestrator | Friday 29 August 2025 14:52:18 +0000 (0:00:00.351) 0:05:09.837 *********
2025-08-29 15:00:15.498524 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.498527 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.498531 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.498535 | orchestrator |
2025-08-29 15:00:15.498538 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-08-29 15:00:15.498542 | orchestrator | Friday 29 August 2025 14:52:18 +0000 (0:00:00.638) 0:05:10.476 *********
2025-08-29 15:00:15.498546 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.498549 | orchestrator |
2025-08-29 15:00:15.498553 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-08-29 15:00:15.498560 | orchestrator | Friday 29 August 2025 14:52:18 +0000 (0:00:00.260) 0:05:10.737 *********
2025-08-29 15:00:15.498564 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.498567 | orchestrator |
2025-08-29 15:00:15.498571 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-08-29 15:00:15.498579 | orchestrator | Friday 29 August 2025 14:52:19 +0000 (0:00:00.247) 0:05:10.985 *********
2025-08-29 15:00:15.498582 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.498586 | orchestrator |
2025-08-29 15:00:15.498590 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-08-29 15:00:15.498593 | orchestrator | Friday 29 August 2025 14:52:19 +0000 (0:00:00.139) 0:05:11.125 *********
2025-08-29 15:00:15.498597 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.498601 | orchestrator |
2025-08-29 15:00:15.498604 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-08-29 15:00:15.498608 | orchestrator | Friday 29 August 2025 14:52:19 +0000 (0:00:00.261) 0:05:11.386 *********
2025-08-29 15:00:15.498612 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.498616 | orchestrator |
2025-08-29 15:00:15.498619 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-08-29 15:00:15.498623 | orchestrator | Friday 29 August 2025 14:52:19 +0000 (0:00:00.301) 0:05:11.688 *********
2025-08-29 15:00:15.498627 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:00:15.498630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 15:00:15.498634 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 15:00:15.498638 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.498642 | orchestrator |
2025-08-29 15:00:15.498645 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-08-29 15:00:15.498649 | orchestrator | Friday 29 August 2025 14:52:20 +0000 (0:00:00.827) 0:05:12.515 *********
2025-08-29 15:00:15.498653 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.498669 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.498673 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.498677 | orchestrator |
2025-08-29 15:00:15.498680 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-08-29 15:00:15.498684 | orchestrator | Friday 29 August 2025 14:52:21 +0000 (0:00:00.608) 0:05:13.124 *********
2025-08-29 15:00:15.498688 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.498692 | orchestrator |
2025-08-29 15:00:15.498695 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-08-29 15:00:15.498699 | orchestrator | Friday 29 August 2025 14:52:21 +0000 (0:00:00.259) 0:05:13.383 *********
2025-08-29 15:00:15.498703 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.498707 | orchestrator |
2025-08-29 15:00:15.498710 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-08-29 15:00:15.498714 | orchestrator | Friday 29 August 2025 14:52:21 +0000 (0:00:00.224) 0:05:13.608 *********
2025-08-29 15:00:15.498718 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.498722 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.498726 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.498729 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:00:15.498733 | orchestrator |
2025-08-29 15:00:15.498737 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-08-29 15:00:15.498741 | orchestrator | Friday 29 August 2025 14:52:22 +0000 (0:00:01.080) 0:05:14.688 *********
2025-08-29 15:00:15.498744 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:00:15.498748 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:00:15.498752 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:00:15.498756 | orchestrator |
2025-08-29 15:00:15.498760 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-08-29 15:00:15.498763 | orchestrator | Friday 29 August 2025 14:52:23 +0000 (0:00:00.588) 0:05:15.277 *********
2025-08-29 15:00:15.498767 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:00:15.498771 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:00:15.498774 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:00:15.498778 | orchestrator |
2025-08-29 15:00:15.498782 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-08-29 15:00:15.498789 | orchestrator | Friday 29 August 2025 14:52:24 +0000 (0:00:01.153) 0:05:16.430 *********
2025-08-29 15:00:15.498793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:00:15.498796 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 15:00:15.498800 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 15:00:15.498804 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.498807 | orchestrator |
2025-08-29 15:00:15.498811 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-08-29 15:00:15.498815 | orchestrator | Friday 29 August 2025 14:52:25 +0000 (0:00:00.612) 0:05:17.042 *********
2025-08-29 15:00:15.498819 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:00:15.498823 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:00:15.498826 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:00:15.498830 | orchestrator |
2025-08-29 15:00:15.498834 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-08-29 15:00:15.498838 | orchestrator | Friday 29 August 2025 14:52:25 +0000 (0:00:00.324) 0:05:17.367 *********
2025-08-29 15:00:15.498841 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.498860 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.498864 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.498867 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:00:15.498871 | orchestrator |
2025-08-29 15:00:15.498875 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-08-29 15:00:15.498879 | orchestrator | Friday 29 August 2025 14:52:26 +0000 (0:00:01.010) 0:05:18.377 *********
2025-08-29 15:00:15.498883 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:00:15.498887 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:00:15.498890 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:00:15.498894 | orchestrator |
2025-08-29 15:00:15.498898 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-08-29 15:00:15.498904 | orchestrator | Friday 29 August 2025 14:52:26 +0000 (0:00:00.316) 0:05:18.694 *********
2025-08-29 15:00:15.498908 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:00:15.498912 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:00:15.498916 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:00:15.498920 | orchestrator |
2025-08-29 15:00:15.498924 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-08-29 15:00:15.498928 | orchestrator | Friday 29 August 2025 14:52:28 +0000 (0:00:01.547) 0:05:20.241 *********
2025-08-29 15:00:15.498932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:00:15.498935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 15:00:15.498939 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 15:00:15.498943 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.498947 | orchestrator |
2025-08-29 15:00:15.498950 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-08-29 15:00:15.498954 | orchestrator | Friday 29 August 2025 14:52:29 +0000 (0:00:00.771) 0:05:21.012 *********
2025-08-29 15:00:15.498958 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:00:15.498962 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:00:15.498965 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:00:15.498969 | orchestrator |
2025-08-29 15:00:15.498973 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-08-29 15:00:15.498977 | orchestrator | Friday 29 August 2025 14:52:29 +0000 (0:00:00.851) 0:05:21.554 *********
2025-08-29 15:00:15.498980 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.498984 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.498988 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.498991 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.498995 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.498999 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.499006 | orchestrator |
2025-08-29 15:00:15.499009 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-08-29 15:00:15.499027 | orchestrator | Friday 29 August 2025 14:52:30 +0000 (0:00:00.851) 0:05:22.406 *********
2025-08-29 15:00:15.499031 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:00:15.499035 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:00:15.499039 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:00:15.499043 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:00:15.499046 | orchestrator |
2025-08-29 15:00:15.499050 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-08-29 15:00:15.499054 | orchestrator | Friday 29 August 2025 14:52:32 +0000 (0:00:01.439) 0:05:23.846 *********
2025-08-29 15:00:15.499058 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.499061 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.499065 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.499069 | orchestrator |
2025-08-29 15:00:15.499073 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-08-29 15:00:15.499077 | orchestrator | Friday 29 August 2025 14:52:32 +0000 (0:00:00.581) 0:05:24.428 *********
2025-08-29 15:00:15.499080 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:15.499084 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:15.499087 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:15.499091 | orchestrator |
2025-08-29 15:00:15.499095 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-08-29 15:00:15.499099 | orchestrator | Friday 29 August 2025 14:52:34 +0000 (0:00:01.648) 0:05:26.076 *********
2025-08-29 15:00:15.499103 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 15:00:15.499106 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 15:00:15.499110 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 15:00:15.499114 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.499118 | orchestrator |
2025-08-29 15:00:15.499121 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-08-29 15:00:15.499125 | orchestrator | Friday 29 August 2025 14:52:35 +0000 (0:00:00.783) 0:05:26.860 *********
2025-08-29 15:00:15.499129 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.499133 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.499136 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.499140 | orchestrator |
2025-08-29 15:00:15.499144 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-08-29 15:00:15.499148 | orchestrator |
2025-08-29 15:00:15.499152 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-08-29 15:00:15.499155 | orchestrator | Friday 29 August 2025 14:52:35 +0000 (0:00:00.686) 0:05:27.546 *********
2025-08-29 15:00:15.499159 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:00:15.499163 | orchestrator |
2025-08-29 15:00:15.499167 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-08-29 15:00:15.499171 | orchestrator | Friday 29 August 2025 14:52:36 +0000 (0:00:00.756) 0:05:28.303 *********
2025-08-29 15:00:15.499174 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-2, testbed-node-1
2025-08-29 15:00:15.499178 | orchestrator |
2025-08-29 15:00:15.499182 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-08-29 15:00:15.499186 | orchestrator | Friday 29 August 2025 14:52:37 +0000 (0:00:00.565) 0:05:28.868 *********
2025-08-29 15:00:15.499190 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.499194 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.499198 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.499202 | orchestrator |
2025-08-29 15:00:15.499206 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-08-29 15:00:15.499210 | orchestrator | Friday 29 August 2025 14:52:37 +0000 (0:00:00.694) 0:05:29.562 *********
2025-08-29 15:00:15.499217 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.499221 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.499225 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.499228 | orchestrator |
2025-08-29 15:00:15.499237 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-08-29 15:00:15.499241 | orchestrator | Friday 29 August 2025 14:52:38 +0000 (0:00:00.705) 0:05:30.268 *********
2025-08-29 15:00:15.499244 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.499248 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.499252 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.499256 | orchestrator |
2025-08-29 15:00:15.499259 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-08-29 15:00:15.499263 | orchestrator | Friday 29 August 2025 14:52:38 +0000 (0:00:00.359) 0:05:30.627 *********
2025-08-29 15:00:15.499267 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.499271 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.499274 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.499278 | orchestrator |
2025-08-29 15:00:15.499282 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-08-29 15:00:15.499286 | orchestrator | Friday 29 August 2025 14:52:39 +0000 (0:00:00.425) 0:05:31.053 *********
2025-08-29 15:00:15.499289 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.499293 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.499297 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.499300 | orchestrator |
2025-08-29 15:00:15.499304 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-08-29 15:00:15.499308 | orchestrator | Friday 29 August 2025 14:52:40 +0000 (0:00:00.974) 0:05:32.028 *********
2025-08-29 15:00:15.499312 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.499315 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.499319 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.499323 | orchestrator |
2025-08-29 15:00:15.499327 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-08-29 15:00:15.499330 | orchestrator | Friday 29 August 2025 14:52:40 +0000 (0:00:00.631) 0:05:32.660 *********
2025-08-29 15:00:15.499334 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.499338 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.499342 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.499345 | orchestrator |
2025-08-29 15:00:15.499361 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-08-29 15:00:15.499366 | orchestrator | Friday 29 August 2025 14:52:41 +0000 (0:00:00.867) 0:05:33.528 *********
2025-08-29 15:00:15.499369 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.499373 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.499377 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.499381 | orchestrator |
2025-08-29 15:00:15.499384 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-08-29 15:00:15.499388 | orchestrator | Friday 29 August 2025 14:52:42 +0000 (0:00:00.745) 0:05:34.273 *********
2025-08-29 15:00:15.499392 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.499396 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.499399 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.499403 | orchestrator |
2025-08-29 15:00:15.499407 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-08-29 15:00:15.499411 | orchestrator | Friday 29 August 2025 14:52:43 +0000 (0:00:01.173) 0:05:35.447 *********
2025-08-29 15:00:15.499414 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.499418 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.499422 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.499426 | orchestrator |
2025-08-29 15:00:15.499429 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-08-29 15:00:15.499433 | orchestrator | Friday 29 August 2025 14:52:44 +0000 (0:00:00.718) 0:05:36.165 *********
2025-08-29 15:00:15.499442 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.499446 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.499450 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.499453 | orchestrator |
2025-08-29 15:00:15.499457 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-08-29 15:00:15.499461 | orchestrator | Friday 29 August 2025 14:52:45 +0000 (0:00:01.277) 0:05:37.442 *********
2025-08-29 15:00:15.499465 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.499468 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.499472 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.499476 | orchestrator |
2025-08-29 15:00:15.499480 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-08-29 15:00:15.499484 | orchestrator | Friday 29 August 2025 14:52:46 +0000 (0:00:00.508) 0:05:37.951 *********
2025-08-29 15:00:15.499487 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.499491 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.499496 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.499501 | orchestrator |
2025-08-29 15:00:15.499509 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-08-29 15:00:15.499516 | orchestrator | Friday 29 August 2025 14:52:46 +0000 (0:00:00.372) 0:05:38.323 *********
2025-08-29 15:00:15.499521 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.499528 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.499534 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.499540 | orchestrator |
2025-08-29 15:00:15.499546 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-08-29 15:00:15.499553 | orchestrator | Friday 29 August 2025 14:52:46 +0000 (0:00:00.348) 0:05:38.672 *********
2025-08-29 15:00:15.499557 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.499561 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.499565 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.499568 | orchestrator |
2025-08-29 15:00:15.499572 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-08-29 15:00:15.499576 | orchestrator | Friday 29 August 2025 14:52:47 +0000 (0:00:00.816) 0:05:39.489 *********
2025-08-29 15:00:15.499579 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.499583 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.499587 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.499590 | orchestrator |
2025-08-29 15:00:15.499594 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-08-29 15:00:15.499598 | orchestrator | Friday 29 August 2025 14:52:48 +0000 (0:00:00.381) 0:05:39.870 *********
2025-08-29 15:00:15.499602 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.499605 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.499609 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.499613 | orchestrator |
2025-08-29 15:00:15.499617 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-08-29 15:00:15.499623 | orchestrator | Friday 29 August 2025 14:52:48 +0000 (0:00:00.345) 0:05:40.216 *********
2025-08-29 15:00:15.499627 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.499631 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.499635 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.499638 | orchestrator |
2025-08-29 15:00:15.499642 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-08-29 15:00:15.499646 | orchestrator | Friday 29 August 2025 14:52:48 +0000 (0:00:00.349) 0:05:40.565 *********
2025-08-29 15:00:15.499650 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.499654 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.499658 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.499661 | orchestrator |
2025-08-29 15:00:15.499665 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-08-29 15:00:15.499669 | orchestrator | Friday 29 August 2025 14:52:49 +0000 (0:00:00.919) 0:05:41.485 *********
2025-08-29 15:00:15.499673 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.499677 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.499683 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.499687 | orchestrator |
2025-08-29 15:00:15.499691 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-08-29 15:00:15.499695 | orchestrator | Friday 29 August 2025 14:52:50 +0000 (0:00:00.361) 0:05:41.846 *********
2025-08-29 15:00:15.499699 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:00:15.499702 | orchestrator |
2025-08-29 15:00:15.499706 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-08-29 15:00:15.499710 | orchestrator | Friday 29 August 2025 14:52:50 +0000 (0:00:00.433) 0:05:42.401 *********
2025-08-29 15:00:15.499714 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.499717 | orchestrator |
2025-08-29 15:00:15.499721 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-08-29 15:00:15.499738 | orchestrator | Friday 29 August 2025 14:52:51 +0000 (0:00:00.433) 0:05:42.835 *********
2025-08-29 15:00:15.499742 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-08-29 15:00:15.499746 | orchestrator |
2025-08-29 15:00:15.499750 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-08-29 15:00:15.499754 | orchestrator | Friday 29 August 2025 14:52:52 +0000 (0:00:01.067) 0:05:43.902 *********
2025-08-29 15:00:15.499758 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.499761 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.499765 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.499769 | orchestrator |
2025-08-29 15:00:15.499773 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-08-29 15:00:15.499777 | orchestrator | Friday 29 August 2025 14:52:52 +0000 (0:00:00.365) 0:05:44.268 *********
2025-08-29 15:00:15.499780 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.499784 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.499788 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.499792 | orchestrator |
2025-08-29 15:00:15.499795 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-08-29 15:00:15.499799 | orchestrator | Friday 29 August 2025 14:52:52 +0000 (0:00:00.467) 0:05:44.736 *********
2025-08-29 15:00:15.499803 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:15.499807 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:15.499811 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:15.499815 | orchestrator |
2025-08-29 15:00:15.499818 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-08-29 15:00:15.499823 | orchestrator | Friday 29 August 2025 14:52:54 +0000 (0:00:01.933) 0:05:46.670 *********
2025-08-29 15:00:15.499827 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:15.499830 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:15.499834 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:15.499838 | orchestrator |
2025-08-29 15:00:15.499842 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-08-29 15:00:15.499862 | orchestrator | Friday 29 August 2025 14:52:56 +0000 (0:00:01.148) 0:05:47.818 *********
2025-08-29 15:00:15.499866 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:15.499870 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:15.499874 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:15.499877 | orchestrator |
2025-08-29 15:00:15.499881 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-08-29 15:00:15.499885 | orchestrator | Friday 29 August 2025 14:52:56 +0000 (0:00:00.799) 0:05:48.618 *********
2025-08-29 15:00:15.499889 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.499893 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.499897 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.499900 | orchestrator |
2025-08-29 15:00:15.499904 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-08-29 15:00:15.499908 | orchestrator | Friday 29 August 2025 14:52:57 +0000 (0:00:00.674) 0:05:49.293 *********
2025-08-29 15:00:15.499912 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:15.499916 | orchestrator |
2025-08-29 15:00:15.499924 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-08-29 15:00:15.499928 | orchestrator | Friday 29 August 2025 14:52:59 +0000 (0:00:01.534) 0:05:50.828 *********
2025-08-29 15:00:15.499931 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.499935 | orchestrator |
2025-08-29 15:00:15.499939 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-08-29 15:00:15.499943 | orchestrator | Friday 29 August 2025 14:52:59 +0000 (0:00:00.782) 0:05:51.610 *********
2025-08-29 15:00:15.499946 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 15:00:15.499951 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:00:15.499954 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:00:15.499958 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 15:00:15.499962 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-08-29 15:00:15.499966 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 15:00:15.499969 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 15:00:15.499973 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-08-29 15:00:15.499979 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 15:00:15.499983 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-08-29 15:00:15.499987 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-08-29 15:00:15.499991 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-08-29 15:00:15.499995 | orchestrator |
2025-08-29 15:00:15.499999 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-08-29 15:00:15.500003 | orchestrator | Friday 29 August 2025 14:53:03 +0000 (0:00:03.735) 0:05:55.345 *********
2025-08-29 15:00:15.500006 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:15.500010 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:15.500014 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:15.500018 | orchestrator |
2025-08-29 15:00:15.500022 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-08-29 15:00:15.500026 | orchestrator | Friday 29 August 2025 14:53:05 +0000 (0:00:02.018) 0:05:57.364 *********
2025-08-29 15:00:15.500030 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.500033 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.500037 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.500041 | orchestrator |
2025-08-29 15:00:15.500045 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-08-29 15:00:15.500049 | orchestrator | Friday 29 August 2025 14:53:06 +0000 (0:00:00.455) 0:05:57.819 *********
2025-08-29 15:00:15.500052 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:15.500056 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:15.500060 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:15.500063 | orchestrator |
2025-08-29 15:00:15.500067 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-08-29 15:00:15.500071 | orchestrator | Friday 29 August 2025 14:53:06 +0000 (0:00:00.376) 0:05:58.196 *********
2025-08-29 15:00:15.500075 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:15.500078 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:15.500082 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:15.500086 | orchestrator |
2025-08-29 15:00:15.500103 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-08-29 15:00:15.500108 | orchestrator | Friday 29 August 2025 14:53:08 +0000 (0:00:02.036) 0:06:00.232 *********
2025-08-29 15:00:15.500111 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:15.500115 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:15.500119 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:15.500123 | orchestrator |
2025-08-29 15:00:15.500126 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-08-29 15:00:15.500130 | orchestrator | Friday 29 August 2025 14:53:10 +0000 (0:00:01.958) 0:06:02.191 *********
2025-08-29 15:00:15.500137 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.500141 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.500145 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.500149 | orchestrator |
2025-08-29 15:00:15.500152 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-08-29 15:00:15.500156 | orchestrator | Friday 29 August 2025 14:53:10 +0000 (0:00:00.429) 0:06:02.620 *********
2025-08-29 15:00:15.500160 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:00:15.500164 | orchestrator |
2025-08-29 15:00:15.500168 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-08-29 15:00:15.500172 | orchestrator | Friday 29 August 2025 14:53:11 +0000 (0:00:00.862) 0:06:03.483 *********
2025-08-29 15:00:15.500175 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.500179 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.500183 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.500187 | orchestrator |
2025-08-29 15:00:15.500190 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-08-29 15:00:15.500194 | orchestrator | Friday 29 August 2025 14:53:12 +0000 (0:00:00.821) 0:06:04.304 *********
2025-08-29 15:00:15.500198 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:15.500202 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:15.500205 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:15.500209 | orchestrator |
2025-08-29 15:00:15.500213 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-08-29 15:00:15.500217 | orchestrator | Friday 29 August 2025 14:53:12 +0000 (0:00:00.410) 0:06:04.715 *********
2025-08-29 15:00:15.500220 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:00:15.500224 | orchestrator |
2025-08-29 15:00:15.500228 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-08-29 15:00:15.500232 | orchestrator | Friday 29 August 2025 14:53:13 +0000 (0:00:00.591) 0:06:05.306 *********
2025-08-29 15:00:15.500235 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:15.500239 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:15.500243 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:15.500247 | orchestrator |
2025-08-29 15:00:15.500250 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-08-29 15:00:15.500254 | orchestrator | Friday 29 August 2025 14:53:16 +0000 (0:00:02.785) 0:06:08.092 *********
2025-08-29 15:00:15.500258 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:15.500262 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:15.500266 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:15.500269 | orchestrator |
2025-08-29 15:00:15.500273 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2025-08-29 15:00:15.500277 | orchestrator | Friday 29 August 2025 14:53:18 +0000 (0:00:01.689) 0:06:09.781 *********
2025-08-29 15:00:15.500280 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:15.500284 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:15.500288 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:15.500292 | orchestrator |
2025-08-29 15:00:15.500295 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2025-08-29 15:00:15.500299 | orchestrator | Friday 29 August 2025 14:53:20 +0000 (0:00:01.998) 0:06:11.779 *********
2025-08-29 15:00:15.500303 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:15.500306 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:15.500310 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:15.500314 | orchestrator |
2025-08-29 15:00:15.500320 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml]
********************************** 2025-08-29 15:00:15.500324 | orchestrator | Friday 29 August 2025 14:53:22 +0000 (0:00:02.078) 0:06:13.858 ********* 2025-08-29 15:00:15.500328 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:15.500334 | orchestrator | 2025-08-29 15:00:15.500338 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-08-29 15:00:15.500342 | orchestrator | Friday 29 August 2025 14:53:23 +0000 (0:00:01.038) 0:06:14.896 ********* 2025-08-29 15:00:15.500346 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-08-29 15:00:15.500349 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.500353 | orchestrator | 2025-08-29 15:00:15.500357 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-08-29 15:00:15.500361 | orchestrator | Friday 29 August 2025 14:53:45 +0000 (0:00:22.031) 0:06:36.928 ********* 2025-08-29 15:00:15.500364 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.500368 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.500372 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.500376 | orchestrator | 2025-08-29 15:00:15.500379 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-08-29 15:00:15.500383 | orchestrator | Friday 29 August 2025 14:53:55 +0000 (0:00:10.413) 0:06:47.341 ********* 2025-08-29 15:00:15.500387 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.500390 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.500394 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.500398 | orchestrator | 2025-08-29 15:00:15.500402 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-08-29 15:00:15.500405 | orchestrator | 
Friday 29 August 2025 14:53:55 +0000 (0:00:00.321) 0:06:47.663 ********* 2025-08-29 15:00:15.500423 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__91de6e6c0b0445bd6032182895cf3b0523b927c0'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-08-29 15:00:15.500429 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__91de6e6c0b0445bd6032182895cf3b0523b927c0'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-08-29 15:00:15.500434 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__91de6e6c0b0445bd6032182895cf3b0523b927c0'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-08-29 15:00:15.500438 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__91de6e6c0b0445bd6032182895cf3b0523b927c0'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-08-29 15:00:15.500443 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 
'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__91de6e6c0b0445bd6032182895cf3b0523b927c0'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-08-29 15:00:15.500448 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__91de6e6c0b0445bd6032182895cf3b0523b927c0'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__91de6e6c0b0445bd6032182895cf3b0523b927c0'}])  2025-08-29 15:00:15.500456 | orchestrator | 2025-08-29 15:00:15.500460 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 15:00:15.500464 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:16.417) 0:07:04.081 ********* 2025-08-29 15:00:15.500468 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.500471 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.500475 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.500479 | orchestrator | 2025-08-29 15:00:15.500483 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-08-29 15:00:15.500487 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:00.362) 0:07:04.443 ********* 2025-08-29 15:00:15.500493 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:15.500497 | orchestrator | 2025-08-29 15:00:15.500501 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-08-29 15:00:15.500505 | orchestrator | Friday 29 August 2025 14:54:13 +0000 (0:00:00.578) 0:07:05.022 ********* 2025-08-29 15:00:15.500509 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.500513 | orchestrator | ok: [testbed-node-1] 2025-08-29 
15:00:15.500517 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.500520 | orchestrator | 2025-08-29 15:00:15.500524 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-08-29 15:00:15.500528 | orchestrator | Friday 29 August 2025 14:54:14 +0000 (0:00:00.788) 0:07:05.812 ********* 2025-08-29 15:00:15.500532 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.500536 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.500539 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.500543 | orchestrator | 2025-08-29 15:00:15.500547 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-08-29 15:00:15.500551 | orchestrator | Friday 29 August 2025 14:54:14 +0000 (0:00:00.400) 0:07:06.213 ********* 2025-08-29 15:00:15.500554 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 15:00:15.500558 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 15:00:15.500562 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 15:00:15.500565 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.500569 | orchestrator | 2025-08-29 15:00:15.500573 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-08-29 15:00:15.500577 | orchestrator | Friday 29 August 2025 14:54:15 +0000 (0:00:00.696) 0:07:06.910 ********* 2025-08-29 15:00:15.500580 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.500584 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.500588 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.500592 | orchestrator | 2025-08-29 15:00:15.500607 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-08-29 15:00:15.500611 | orchestrator | 2025-08-29 15:00:15.500615 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2025-08-29 15:00:15.500619 | orchestrator | Friday 29 August 2025 14:54:16 +0000 (0:00:00.972) 0:07:07.882 ********* 2025-08-29 15:00:15.500623 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:15.500627 | orchestrator | 2025-08-29 15:00:15.500631 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 15:00:15.500634 | orchestrator | Friday 29 August 2025 14:54:16 +0000 (0:00:00.532) 0:07:08.415 ********* 2025-08-29 15:00:15.500638 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:15.500642 | orchestrator | 2025-08-29 15:00:15.500646 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 15:00:15.500649 | orchestrator | Friday 29 August 2025 14:54:17 +0000 (0:00:00.535) 0:07:08.950 ********* 2025-08-29 15:00:15.500653 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.500657 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.500664 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.500667 | orchestrator | 2025-08-29 15:00:15.500671 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 15:00:15.500675 | orchestrator | Friday 29 August 2025 14:54:18 +0000 (0:00:01.281) 0:07:10.232 ********* 2025-08-29 15:00:15.500679 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.500683 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.500686 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.500690 | orchestrator | 2025-08-29 15:00:15.500694 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 15:00:15.500697 | orchestrator | Friday 29 August 2025 14:54:18 +0000 
(0:00:00.375) 0:07:10.608 ********* 2025-08-29 15:00:15.500701 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.500705 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.500709 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.500712 | orchestrator | 2025-08-29 15:00:15.500716 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:00:15.500720 | orchestrator | Friday 29 August 2025 14:54:19 +0000 (0:00:00.348) 0:07:10.957 ********* 2025-08-29 15:00:15.500724 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.500727 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.500731 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.500735 | orchestrator | 2025-08-29 15:00:15.500738 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:00:15.500742 | orchestrator | Friday 29 August 2025 14:54:19 +0000 (0:00:00.506) 0:07:11.464 ********* 2025-08-29 15:00:15.500746 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.500750 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.500753 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.500757 | orchestrator | 2025-08-29 15:00:15.500761 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 15:00:15.500765 | orchestrator | Friday 29 August 2025 14:54:20 +0000 (0:00:01.154) 0:07:12.618 ********* 2025-08-29 15:00:15.500768 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.500772 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.500776 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.500779 | orchestrator | 2025-08-29 15:00:15.500783 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:00:15.500787 | orchestrator | Friday 29 August 2025 14:54:21 +0000 (0:00:00.341) 
0:07:12.959 ********* 2025-08-29 15:00:15.500792 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.500798 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.500804 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.500810 | orchestrator | 2025-08-29 15:00:15.500816 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 15:00:15.500821 | orchestrator | Friday 29 August 2025 14:54:21 +0000 (0:00:00.312) 0:07:13.272 ********* 2025-08-29 15:00:15.500827 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.500833 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.500839 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.500857 | orchestrator | 2025-08-29 15:00:15.500864 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:00:15.500870 | orchestrator | Friday 29 August 2025 14:54:22 +0000 (0:00:00.801) 0:07:14.074 ********* 2025-08-29 15:00:15.500876 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.500882 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.500888 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.500894 | orchestrator | 2025-08-29 15:00:15.500949 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:00:15.500962 | orchestrator | Friday 29 August 2025 14:54:23 +0000 (0:00:01.003) 0:07:15.077 ********* 2025-08-29 15:00:15.500966 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.500970 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.500973 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.500981 | orchestrator | 2025-08-29 15:00:15.500985 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:00:15.500988 | orchestrator | Friday 29 August 2025 14:54:23 +0000 (0:00:00.346) 0:07:15.424 ********* 2025-08-29 
15:00:15.500992 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.500996 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.501000 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.501004 | orchestrator | 2025-08-29 15:00:15.501007 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:00:15.501011 | orchestrator | Friday 29 August 2025 14:54:24 +0000 (0:00:00.389) 0:07:15.814 ********* 2025-08-29 15:00:15.501015 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.501019 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.501023 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.501026 | orchestrator | 2025-08-29 15:00:15.501030 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:00:15.501034 | orchestrator | Friday 29 August 2025 14:54:24 +0000 (0:00:00.317) 0:07:16.131 ********* 2025-08-29 15:00:15.501038 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.501041 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.501063 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.501068 | orchestrator | 2025-08-29 15:00:15.501072 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:00:15.501075 | orchestrator | Friday 29 August 2025 14:54:24 +0000 (0:00:00.599) 0:07:16.731 ********* 2025-08-29 15:00:15.501079 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.501083 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.501087 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.501090 | orchestrator | 2025-08-29 15:00:15.501094 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 15:00:15.501098 | orchestrator | Friday 29 August 2025 14:54:25 +0000 (0:00:00.476) 0:07:17.207 ********* 2025-08-29 15:00:15.501102 | 
orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.501105 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.501109 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.501113 | orchestrator | 2025-08-29 15:00:15.501117 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:00:15.501121 | orchestrator | Friday 29 August 2025 14:54:25 +0000 (0:00:00.324) 0:07:17.532 ********* 2025-08-29 15:00:15.501124 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.501128 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.501132 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.501136 | orchestrator | 2025-08-29 15:00:15.501139 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:00:15.501143 | orchestrator | Friday 29 August 2025 14:54:26 +0000 (0:00:00.330) 0:07:17.862 ********* 2025-08-29 15:00:15.501147 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.501151 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.501154 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.501158 | orchestrator | 2025-08-29 15:00:15.501162 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:00:15.501166 | orchestrator | Friday 29 August 2025 14:54:26 +0000 (0:00:00.341) 0:07:18.204 ********* 2025-08-29 15:00:15.501169 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.501173 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.501177 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.501180 | orchestrator | 2025-08-29 15:00:15.501185 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 15:00:15.501188 | orchestrator | Friday 29 August 2025 14:54:27 +0000 (0:00:00.676) 0:07:18.881 ********* 2025-08-29 15:00:15.501192 | orchestrator | ok: [testbed-node-0] 
2025-08-29 15:00:15.501196 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.501200 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.501203 | orchestrator | 2025-08-29 15:00:15.501207 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-08-29 15:00:15.501215 | orchestrator | Friday 29 August 2025 14:54:27 +0000 (0:00:00.581) 0:07:19.462 ********* 2025-08-29 15:00:15.501219 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 15:00:15.501223 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:00:15.501226 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:00:15.501230 | orchestrator | 2025-08-29 15:00:15.501234 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-08-29 15:00:15.501238 | orchestrator | Friday 29 August 2025 14:54:28 +0000 (0:00:00.888) 0:07:20.350 ********* 2025-08-29 15:00:15.501242 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:15.501245 | orchestrator | 2025-08-29 15:00:15.501249 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-08-29 15:00:15.501253 | orchestrator | Friday 29 August 2025 14:54:29 +0000 (0:00:00.866) 0:07:21.217 ********* 2025-08-29 15:00:15.501257 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:15.501261 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:15.501265 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:15.501268 | orchestrator | 2025-08-29 15:00:15.501272 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-08-29 15:00:15.501276 | orchestrator | Friday 29 August 2025 14:54:30 +0000 (0:00:00.826) 0:07:22.044 ********* 2025-08-29 15:00:15.501280 | 
orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.501286 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.501290 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.501293 | orchestrator | 2025-08-29 15:00:15.501297 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-08-29 15:00:15.501301 | orchestrator | Friday 29 August 2025 14:54:30 +0000 (0:00:00.446) 0:07:22.490 ********* 2025-08-29 15:00:15.501305 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:00:15.501308 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:00:15.501312 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:00:15.501316 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-08-29 15:00:15.501320 | orchestrator | 2025-08-29 15:00:15.501323 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-08-29 15:00:15.501327 | orchestrator | Friday 29 August 2025 14:54:42 +0000 (0:00:11.494) 0:07:33.985 ********* 2025-08-29 15:00:15.501331 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.501335 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.501338 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.501342 | orchestrator | 2025-08-29 15:00:15.501346 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-08-29 15:00:15.501350 | orchestrator | Friday 29 August 2025 14:54:42 +0000 (0:00:00.645) 0:07:34.630 ********* 2025-08-29 15:00:15.501353 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-08-29 15:00:15.501357 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-08-29 15:00:15.501361 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-08-29 15:00:15.501364 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-08-29 15:00:15.501368 | orchestrator | ok: 
[testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:15.501372 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:15.501376 | orchestrator | 2025-08-29 15:00:15.501393 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-08-29 15:00:15.501397 | orchestrator | Friday 29 August 2025 14:54:45 +0000 (0:00:02.254) 0:07:36.885 ********* 2025-08-29 15:00:15.501401 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-08-29 15:00:15.501404 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-08-29 15:00:15.501408 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-08-29 15:00:15.501412 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:00:15.501419 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-08-29 15:00:15.501422 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-08-29 15:00:15.501426 | orchestrator | 2025-08-29 15:00:15.501430 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-08-29 15:00:15.501433 | orchestrator | Friday 29 August 2025 14:54:46 +0000 (0:00:01.320) 0:07:38.206 ********* 2025-08-29 15:00:15.501437 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.501441 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.501445 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.501448 | orchestrator | 2025-08-29 15:00:15.501452 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-08-29 15:00:15.501456 | orchestrator | Friday 29 August 2025 14:54:47 +0000 (0:00:00.710) 0:07:38.917 ********* 2025-08-29 15:00:15.501459 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.501463 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.501467 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.501471 | 
orchestrator | 2025-08-29 15:00:15.501474 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-08-29 15:00:15.501478 | orchestrator | Friday 29 August 2025 14:54:47 +0000 (0:00:00.308) 0:07:39.226 ********* 2025-08-29 15:00:15.501482 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.501486 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.501489 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.501495 | orchestrator | 2025-08-29 15:00:15.501501 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-08-29 15:00:15.501507 | orchestrator | Friday 29 August 2025 14:54:48 +0000 (0:00:00.682) 0:07:39.909 ********* 2025-08-29 15:00:15.501513 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:15.501519 | orchestrator | 2025-08-29 15:00:15.501526 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-08-29 15:00:15.501532 | orchestrator | Friday 29 August 2025 14:54:48 +0000 (0:00:00.598) 0:07:40.507 ********* 2025-08-29 15:00:15.501538 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.501544 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.501550 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.501557 | orchestrator | 2025-08-29 15:00:15.501563 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-08-29 15:00:15.501570 | orchestrator | Friday 29 August 2025 14:54:49 +0000 (0:00:00.318) 0:07:40.826 ********* 2025-08-29 15:00:15.501574 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.501577 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.501581 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.501585 | orchestrator | 2025-08-29 15:00:15.501588 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-08-29 15:00:15.501592 | orchestrator | Friday 29 August 2025 14:54:49 +0000 (0:00:00.846) 0:07:41.673 ********* 2025-08-29 15:00:15.501596 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:15.501600 | orchestrator | 2025-08-29 15:00:15.501603 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-08-29 15:00:15.501607 | orchestrator | Friday 29 August 2025 14:54:50 +0000 (0:00:00.874) 0:07:42.547 ********* 2025-08-29 15:00:15.501611 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:15.501614 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:15.501618 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:15.501622 | orchestrator | 2025-08-29 15:00:15.501626 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-08-29 15:00:15.501629 | orchestrator | Friday 29 August 2025 14:54:52 +0000 (0:00:01.454) 0:07:44.001 ********* 2025-08-29 15:00:15.501636 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:15.501640 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:15.501644 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:15.501651 | orchestrator | 2025-08-29 15:00:15.501655 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-08-29 15:00:15.501659 | orchestrator | Friday 29 August 2025 14:54:53 +0000 (0:00:01.623) 0:07:45.625 ********* 2025-08-29 15:00:15.501662 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:15.501666 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:15.501670 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:15.501674 | orchestrator | 2025-08-29 15:00:15.501677 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2025-08-29 15:00:15.501681 | orchestrator | Friday 29 August 2025 14:54:55 +0000 (0:00:01.785) 0:07:47.410 ********* 2025-08-29 15:00:15.501685 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:15.501689 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:15.501692 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:15.501696 | orchestrator | 2025-08-29 15:00:15.501700 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-08-29 15:00:15.501704 | orchestrator | Friday 29 August 2025 14:54:57 +0000 (0:00:01.872) 0:07:49.283 ********* 2025-08-29 15:00:15.501707 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.501711 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.501715 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-08-29 15:00:15.501718 | orchestrator | 2025-08-29 15:00:15.501722 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-08-29 15:00:15.501726 | orchestrator | Friday 29 August 2025 14:54:58 +0000 (0:00:00.528) 0:07:49.811 ********* 2025-08-29 15:00:15.501730 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-08-29 15:00:15.501749 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-08-29 15:00:15.501753 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-08-29 15:00:15.501757 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 
2025-08-29 15:00:15.501761 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:00:15.501764 | orchestrator | 2025-08-29 15:00:15.501768 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-08-29 15:00:15.501772 | orchestrator | Friday 29 August 2025 14:55:23 +0000 (0:00:25.072) 0:08:14.884 ********* 2025-08-29 15:00:15.501776 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:00:15.501779 | orchestrator | 2025-08-29 15:00:15.501783 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-08-29 15:00:15.501787 | orchestrator | Friday 29 August 2025 14:55:24 +0000 (0:00:01.266) 0:08:16.150 ********* 2025-08-29 15:00:15.501791 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.501794 | orchestrator | 2025-08-29 15:00:15.501798 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-08-29 15:00:15.501802 | orchestrator | Friday 29 August 2025 14:55:24 +0000 (0:00:00.318) 0:08:16.469 ********* 2025-08-29 15:00:15.501805 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.501809 | orchestrator | 2025-08-29 15:00:15.501813 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-08-29 15:00:15.501817 | orchestrator | Friday 29 August 2025 14:55:24 +0000 (0:00:00.161) 0:08:16.631 ********* 2025-08-29 15:00:15.501820 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-08-29 15:00:15.501824 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-08-29 15:00:15.501828 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-08-29 15:00:15.501831 | orchestrator | 2025-08-29 15:00:15.501835 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2025-08-29 15:00:15.501839 | orchestrator | Friday 29 August 2025 14:55:31 +0000 (0:00:06.434) 0:08:23.066 ********* 2025-08-29 15:00:15.501872 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-08-29 15:00:15.501877 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-08-29 15:00:15.501881 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-08-29 15:00:15.501884 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-08-29 15:00:15.501888 | orchestrator | 2025-08-29 15:00:15.501892 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 15:00:15.501895 | orchestrator | Friday 29 August 2025 14:55:36 +0000 (0:00:04.925) 0:08:27.991 ********* 2025-08-29 15:00:15.501899 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:15.501903 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:15.501907 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:15.501911 | orchestrator | 2025-08-29 15:00:15.501914 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-08-29 15:00:15.501918 | orchestrator | Friday 29 August 2025 14:55:37 +0000 (0:00:00.985) 0:08:28.976 ********* 2025-08-29 15:00:15.501922 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:15.501926 | orchestrator | 2025-08-29 15:00:15.501930 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-08-29 15:00:15.501933 | orchestrator | Friday 29 August 2025 14:55:37 +0000 (0:00:00.516) 0:08:29.493 ********* 2025-08-29 15:00:15.501937 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.501941 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.501944 | orchestrator | ok: 
[testbed-node-2] 2025-08-29 15:00:15.501948 | orchestrator | 2025-08-29 15:00:15.501952 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-08-29 15:00:15.501958 | orchestrator | Friday 29 August 2025 14:55:38 +0000 (0:00:00.319) 0:08:29.813 ********* 2025-08-29 15:00:15.501962 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:15.501966 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:15.501969 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:15.501973 | orchestrator | 2025-08-29 15:00:15.501977 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-08-29 15:00:15.501980 | orchestrator | Friday 29 August 2025 14:55:39 +0000 (0:00:01.430) 0:08:31.244 ********* 2025-08-29 15:00:15.501984 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 15:00:15.501988 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 15:00:15.501992 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 15:00:15.501995 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.501999 | orchestrator | 2025-08-29 15:00:15.502003 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-08-29 15:00:15.502007 | orchestrator | Friday 29 August 2025 14:55:40 +0000 (0:00:00.643) 0:08:31.888 ********* 2025-08-29 15:00:15.502010 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.502031 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.502034 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.502038 | orchestrator | 2025-08-29 15:00:15.502042 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-08-29 15:00:15.502046 | orchestrator | 2025-08-29 15:00:15.502050 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 
15:00:15.502054 | orchestrator | Friday 29 August 2025 14:55:40 +0000 (0:00:00.604) 0:08:32.492 ********* 2025-08-29 15:00:15.502058 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.502062 | orchestrator | 2025-08-29 15:00:15.502066 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 15:00:15.502084 | orchestrator | Friday 29 August 2025 14:55:41 +0000 (0:00:00.779) 0:08:33.271 ********* 2025-08-29 15:00:15.502088 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.502096 | orchestrator | 2025-08-29 15:00:15.502100 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 15:00:15.502104 | orchestrator | Friday 29 August 2025 14:55:42 +0000 (0:00:00.518) 0:08:33.790 ********* 2025-08-29 15:00:15.502107 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.502111 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.502115 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.502118 | orchestrator | 2025-08-29 15:00:15.502122 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 15:00:15.502126 | orchestrator | Friday 29 August 2025 14:55:42 +0000 (0:00:00.292) 0:08:34.082 ********* 2025-08-29 15:00:15.502130 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.502133 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.502137 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.502141 | orchestrator | 2025-08-29 15:00:15.502144 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 15:00:15.502148 | orchestrator | Friday 29 August 2025 14:55:43 +0000 (0:00:01.120) 0:08:35.203 ********* 
2025-08-29 15:00:15.502152 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.502156 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.502159 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.502163 | orchestrator | 2025-08-29 15:00:15.502167 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:00:15.502171 | orchestrator | Friday 29 August 2025 14:55:44 +0000 (0:00:00.771) 0:08:35.974 ********* 2025-08-29 15:00:15.502175 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.502178 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.502182 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.502186 | orchestrator | 2025-08-29 15:00:15.502189 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:00:15.502193 | orchestrator | Friday 29 August 2025 14:55:45 +0000 (0:00:00.844) 0:08:36.819 ********* 2025-08-29 15:00:15.502197 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.502201 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.502204 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.502208 | orchestrator | 2025-08-29 15:00:15.502212 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 15:00:15.502215 | orchestrator | Friday 29 August 2025 14:55:45 +0000 (0:00:00.311) 0:08:37.130 ********* 2025-08-29 15:00:15.502219 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.502223 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.502227 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.502230 | orchestrator | 2025-08-29 15:00:15.502234 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:00:15.502238 | orchestrator | Friday 29 August 2025 14:55:46 +0000 (0:00:00.682) 0:08:37.813 ********* 2025-08-29 15:00:15.502242 | 
orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.502245 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.502249 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.502253 | orchestrator | 2025-08-29 15:00:15.502257 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 15:00:15.502260 | orchestrator | Friday 29 August 2025 14:55:46 +0000 (0:00:00.388) 0:08:38.201 ********* 2025-08-29 15:00:15.502264 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.502268 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.502271 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.502275 | orchestrator | 2025-08-29 15:00:15.502279 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:00:15.502283 | orchestrator | Friday 29 August 2025 14:55:47 +0000 (0:00:00.742) 0:08:38.944 ********* 2025-08-29 15:00:15.502287 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.502290 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.502294 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.502301 | orchestrator | 2025-08-29 15:00:15.502304 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:00:15.502308 | orchestrator | Friday 29 August 2025 14:55:47 +0000 (0:00:00.779) 0:08:39.724 ********* 2025-08-29 15:00:15.502312 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.502318 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.502322 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.502325 | orchestrator | 2025-08-29 15:00:15.502329 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:00:15.502333 | orchestrator | Friday 29 August 2025 14:55:48 +0000 (0:00:00.674) 0:08:40.399 ********* 2025-08-29 15:00:15.502337 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 15:00:15.502340 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.502344 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.502348 | orchestrator | 2025-08-29 15:00:15.502352 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:00:15.502355 | orchestrator | Friday 29 August 2025 14:55:49 +0000 (0:00:00.436) 0:08:40.835 ********* 2025-08-29 15:00:15.502359 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.502363 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.502367 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.502370 | orchestrator | 2025-08-29 15:00:15.502374 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:00:15.502378 | orchestrator | Friday 29 August 2025 14:55:49 +0000 (0:00:00.444) 0:08:41.279 ********* 2025-08-29 15:00:15.502382 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.502385 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.502389 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.502393 | orchestrator | 2025-08-29 15:00:15.502396 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:00:15.502400 | orchestrator | Friday 29 August 2025 14:55:49 +0000 (0:00:00.376) 0:08:41.656 ********* 2025-08-29 15:00:15.502404 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.502407 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.502411 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.502415 | orchestrator | 2025-08-29 15:00:15.502419 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 15:00:15.502422 | orchestrator | Friday 29 August 2025 14:55:50 +0000 (0:00:00.724) 0:08:42.381 ********* 2025-08-29 15:00:15.502428 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.502432 | 
orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.502436 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.502440 | orchestrator | 2025-08-29 15:00:15.502444 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:00:15.502447 | orchestrator | Friday 29 August 2025 14:55:50 +0000 (0:00:00.381) 0:08:42.762 ********* 2025-08-29 15:00:15.502451 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.502455 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.502459 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.502462 | orchestrator | 2025-08-29 15:00:15.502466 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:00:15.502470 | orchestrator | Friday 29 August 2025 14:55:51 +0000 (0:00:00.351) 0:08:43.114 ********* 2025-08-29 15:00:15.502473 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.502477 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.502481 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.502484 | orchestrator | 2025-08-29 15:00:15.502488 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:00:15.502492 | orchestrator | Friday 29 August 2025 14:55:51 +0000 (0:00:00.316) 0:08:43.430 ********* 2025-08-29 15:00:15.502496 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.502499 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.502503 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.502507 | orchestrator | 2025-08-29 15:00:15.502510 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 15:00:15.502518 | orchestrator | Friday 29 August 2025 14:55:52 +0000 (0:00:00.691) 0:08:44.121 ********* 2025-08-29 15:00:15.502521 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.502525 | orchestrator | ok: 
[testbed-node-4] 2025-08-29 15:00:15.502529 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.502533 | orchestrator | 2025-08-29 15:00:15.502537 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-08-29 15:00:15.502540 | orchestrator | Friday 29 August 2025 14:55:52 +0000 (0:00:00.565) 0:08:44.687 ********* 2025-08-29 15:00:15.502544 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.502548 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.502551 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.502555 | orchestrator | 2025-08-29 15:00:15.502559 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-08-29 15:00:15.502563 | orchestrator | Friday 29 August 2025 14:55:53 +0000 (0:00:00.384) 0:08:45.071 ********* 2025-08-29 15:00:15.502566 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:00:15.502570 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:00:15.502574 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:00:15.502577 | orchestrator | 2025-08-29 15:00:15.502581 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-08-29 15:00:15.502585 | orchestrator | Friday 29 August 2025 14:55:54 +0000 (0:00:01.034) 0:08:46.106 ********* 2025-08-29 15:00:15.502589 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.502592 | orchestrator | 2025-08-29 15:00:15.502596 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-08-29 15:00:15.502600 | orchestrator | Friday 29 August 2025 14:55:55 +0000 (0:00:00.974) 0:08:47.080 ********* 2025-08-29 15:00:15.502604 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 15:00:15.502608 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.502612 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.502615 | orchestrator | 2025-08-29 15:00:15.502619 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-08-29 15:00:15.502623 | orchestrator | Friday 29 August 2025 14:55:55 +0000 (0:00:00.363) 0:08:47.443 ********* 2025-08-29 15:00:15.502627 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.502631 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.502634 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.502638 | orchestrator | 2025-08-29 15:00:15.502642 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-08-29 15:00:15.502648 | orchestrator | Friday 29 August 2025 14:55:56 +0000 (0:00:00.407) 0:08:47.851 ********* 2025-08-29 15:00:15.502652 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.502656 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.502659 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.502663 | orchestrator | 2025-08-29 15:00:15.502667 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-08-29 15:00:15.502671 | orchestrator | Friday 29 August 2025 14:55:57 +0000 (0:00:00.987) 0:08:48.838 ********* 2025-08-29 15:00:15.502675 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.502679 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.502682 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.502686 | orchestrator | 2025-08-29 15:00:15.502690 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-08-29 15:00:15.502694 | orchestrator | Friday 29 August 2025 14:55:57 +0000 (0:00:00.381) 0:08:49.220 ********* 2025-08-29 15:00:15.502697 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 15:00:15.502701 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 15:00:15.502705 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 15:00:15.502715 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 15:00:15.502718 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 15:00:15.502722 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 15:00:15.502726 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 15:00:15.502734 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 15:00:15.502738 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 15:00:15.502742 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 15:00:15.502745 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 15:00:15.502749 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 15:00:15.502753 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 15:00:15.502757 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 15:00:15.502760 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 15:00:15.502764 | orchestrator | 2025-08-29 15:00:15.502768 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
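The "Apply operating system tuning" task above applies five sysctl values per OSD node (`fs.aio-max-nr`, `fs.file-max`, `vm.zone_reclaim_mode`, `vm.swappiness`, `vm.min_free_kbytes`). A hedged sketch rendering that list into `sysctl.conf`-style lines; the values are copied from the task output, while the handling of the `enable` flag (skip entries explicitly set to `False`) is an assumption based on the `'enable': True` field shown on the first item:

```python
# Tuning items as reported by the 'Apply operating system tuning' task.
TUNINGS = [
    {"name": "fs.aio-max-nr", "value": "1048576", "enable": True},
    {"name": "fs.file-max", "value": 26234859},
    {"name": "vm.zone_reclaim_mode", "value": 0},
    {"name": "vm.swappiness", "value": 10},
    {"name": "vm.min_free_kbytes", "value": "67584"},
]

def render_sysctl(tunings):
    """Return sysctl.conf-style lines, skipping explicitly disabled entries.

    Entries without an 'enable' key are treated as enabled (assumption).
    """
    return [
        f"{t['name']} = {t['value']}"
        for t in tunings
        if t.get("enable", True)
    ]
```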
2025-08-29 15:00:15.502772 | orchestrator | Friday 29 August 2025 14:55:59 +0000 (0:00:02.229) 0:08:51.449 ********* 2025-08-29 15:00:15.502776 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.502779 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.502783 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.502787 | orchestrator | 2025-08-29 15:00:15.502791 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-08-29 15:00:15.502794 | orchestrator | Friday 29 August 2025 14:55:59 +0000 (0:00:00.310) 0:08:51.759 ********* 2025-08-29 15:00:15.502798 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.502802 | orchestrator | 2025-08-29 15:00:15.502806 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-08-29 15:00:15.502810 | orchestrator | Friday 29 August 2025 14:56:00 +0000 (0:00:00.878) 0:08:52.638 ********* 2025-08-29 15:00:15.502813 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 15:00:15.502817 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 15:00:15.502821 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 15:00:15.502825 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-08-29 15:00:15.502829 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-08-29 15:00:15.502833 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-08-29 15:00:15.502837 | orchestrator | 2025-08-29 15:00:15.502840 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-08-29 15:00:15.502854 | orchestrator | Friday 29 August 2025 14:56:01 +0000 (0:00:01.053) 0:08:53.691 ********* 2025-08-29 15:00:15.502858 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:15.502862 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:00:15.502866 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:00:15.502869 | orchestrator | 2025-08-29 15:00:15.502873 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-08-29 15:00:15.502877 | orchestrator | Friday 29 August 2025 14:56:04 +0000 (0:00:02.135) 0:08:55.827 ********* 2025-08-29 15:00:15.502881 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:00:15.502885 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:00:15.502892 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.502896 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 15:00:15.502900 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 15:00:15.502904 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.502908 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:00:15.502911 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 15:00:15.502915 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.502919 | orchestrator | 2025-08-29 15:00:15.502925 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-08-29 15:00:15.502929 | orchestrator | Friday 29 August 2025 14:56:05 +0000 (0:00:01.563) 0:08:57.390 ********* 2025-08-29 15:00:15.502933 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:00:15.502936 | orchestrator | 2025-08-29 15:00:15.502940 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-08-29 15:00:15.502944 | orchestrator | Friday 29 August 2025 14:56:07 +0000 (0:00:02.156) 0:08:59.547 ********* 2025-08-29 15:00:15.502948 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.502952 | orchestrator | 2025-08-29 15:00:15.502955 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-08-29 15:00:15.502959 | orchestrator | Friday 29 August 2025 14:56:08 +0000 (0:00:00.591) 0:09:00.139 ********* 2025-08-29 15:00:15.502963 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281', 'data_vg': 'ceph-dc8c4f7f-2eb1-5ff6-8642-584f5da1f281'}) 2025-08-29 15:00:15.502967 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8955e74f-f88a-5c8e-a869-5f490c143acc', 'data_vg': 'ceph-8955e74f-f88a-5c8e-a869-5f490c143acc'}) 2025-08-29 15:00:15.502971 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-73f6d854-e6b6-54de-b399-c089d2858352', 'data_vg': 'ceph-73f6d854-e6b6-54de-b399-c089d2858352'}) 2025-08-29 15:00:15.502975 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde', 'data_vg': 'ceph-74173feb-4ed6-53ea-9fd2-1d4ff9ba2fde'}) 2025-08-29 15:00:15.502981 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b0db6b07-6be9-5d1b-9597-ea455233b3a1', 'data_vg': 'ceph-b0db6b07-6be9-5d1b-9597-ea455233b3a1'}) 2025-08-29 15:00:15.502985 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-76bc2ac4-c5cd-591d-a103-fddbd09e4373', 'data_vg': 'ceph-76bc2ac4-c5cd-591d-a103-fddbd09e4373'}) 2025-08-29 15:00:15.502989 | orchestrator | 2025-08-29 15:00:15.502993 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-08-29 15:00:15.502997 | orchestrator | Friday 29 August 2025 14:56:49 +0000 (0:00:41.518) 0:09:41.657 ********* 2025-08-29 15:00:15.503001 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503004 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
15:00:15.503008 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.503012 | orchestrator | 2025-08-29 15:00:15.503016 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-08-29 15:00:15.503019 | orchestrator | Friday 29 August 2025 14:56:50 +0000 (0:00:00.820) 0:09:42.477 ********* 2025-08-29 15:00:15.503023 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.503027 | orchestrator | 2025-08-29 15:00:15.503031 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-08-29 15:00:15.503034 | orchestrator | Friday 29 August 2025 14:56:51 +0000 (0:00:00.774) 0:09:43.251 ********* 2025-08-29 15:00:15.503038 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.503042 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.503046 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.503050 | orchestrator | 2025-08-29 15:00:15.503053 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-08-29 15:00:15.503061 | orchestrator | Friday 29 August 2025 14:56:52 +0000 (0:00:00.676) 0:09:43.928 ********* 2025-08-29 15:00:15.503065 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.503069 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.503072 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.503076 | orchestrator | 2025-08-29 15:00:15.503080 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-08-29 15:00:15.503084 | orchestrator | Friday 29 August 2025 14:56:54 +0000 (0:00:02.796) 0:09:46.725 ********* 2025-08-29 15:00:15.503087 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.503091 | orchestrator | 2025-08-29 15:00:15.503095 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-08-29 15:00:15.503099 | orchestrator | Friday 29 August 2025 14:56:55 +0000 (0:00:00.548) 0:09:47.273 ********* 2025-08-29 15:00:15.503102 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.503106 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.503110 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.503114 | orchestrator | 2025-08-29 15:00:15.503117 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-08-29 15:00:15.503121 | orchestrator | Friday 29 August 2025 14:56:56 +0000 (0:00:01.171) 0:09:48.444 ********* 2025-08-29 15:00:15.503125 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.503128 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.503132 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.503136 | orchestrator | 2025-08-29 15:00:15.503140 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-08-29 15:00:15.503143 | orchestrator | Friday 29 August 2025 14:56:58 +0000 (0:00:01.444) 0:09:49.889 ********* 2025-08-29 15:00:15.503147 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.503151 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.503155 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.503158 | orchestrator | 2025-08-29 15:00:15.503162 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-08-29 15:00:15.503166 | orchestrator | Friday 29 August 2025 14:57:00 +0000 (0:00:02.686) 0:09:52.575 ********* 2025-08-29 15:00:15.503170 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503173 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.503177 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.503181 | orchestrator | 2025-08-29 15:00:15.503187 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-08-29 15:00:15.503191 | orchestrator | Friday 29 August 2025 14:57:01 +0000 (0:00:00.278) 0:09:52.854 ********* 2025-08-29 15:00:15.503194 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503200 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.503206 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.503212 | orchestrator | 2025-08-29 15:00:15.503218 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-08-29 15:00:15.503224 | orchestrator | Friday 29 August 2025 14:57:01 +0000 (0:00:00.276) 0:09:53.130 ********* 2025-08-29 15:00:15.503230 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-08-29 15:00:15.503236 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-08-29 15:00:15.503243 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-08-29 15:00:15.503249 | orchestrator | ok: [testbed-node-3] => (item=2) 2025-08-29 15:00:15.503255 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-08-29 15:00:15.503261 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-08-29 15:00:15.503267 | orchestrator | 2025-08-29 15:00:15.503274 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-08-29 15:00:15.503278 | orchestrator | Friday 29 August 2025 14:57:02 +0000 (0:00:01.277) 0:09:54.408 ********* 2025-08-29 15:00:15.503281 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-08-29 15:00:15.503285 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-08-29 15:00:15.503289 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-08-29 15:00:15.503297 | orchestrator | changed: [testbed-node-3] => (item=2) 2025-08-29 15:00:15.503300 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-08-29 15:00:15.503304 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-08-29 15:00:15.503308 | orchestrator | 2025-08-29 15:00:15.503311 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-08-29 15:00:15.503315 | orchestrator | Friday 29 August 2025 14:57:04 +0000 (0:00:02.086) 0:09:56.494 ********* 2025-08-29 15:00:15.503322 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-08-29 15:00:15.503326 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-08-29 15:00:15.503330 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-08-29 15:00:15.503334 | orchestrator | changed: [testbed-node-3] => (item=2) 2025-08-29 15:00:15.503337 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-08-29 15:00:15.503341 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-08-29 15:00:15.503344 | orchestrator | 2025-08-29 15:00:15.503348 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-08-29 15:00:15.503352 | orchestrator | Friday 29 August 2025 14:57:08 +0000 (0:00:03.558) 0:10:00.052 ********* 2025-08-29 15:00:15.503356 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503359 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.503363 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:00:15.503367 | orchestrator | 2025-08-29 15:00:15.503371 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-08-29 15:00:15.503374 | orchestrator | Friday 29 August 2025 14:57:11 +0000 (0:00:02.744) 0:10:02.797 ********* 2025-08-29 15:00:15.503378 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503382 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.503386 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
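The "Wait for all osd to be up" task above retries (here, once) until every registered OSD reports up. A minimal sketch of that check in Python, assuming the JSON shape emitted by `ceph osd stat -f json` (fields `num_osds` and `num_up_osds`; the retry loop itself is handled by Ansible's `retries`/`until`):

```python
import json

def all_osds_up(osd_stat_json: str) -> bool:
    """Return True when every registered OSD is reported 'up'.

    Expects the JSON printed by `ceph osd stat -f json`, which includes
    `num_osds` (registered OSDs) and `num_up_osds` (OSDs currently up).
    """
    stat = json.loads(osd_stat_json)
    # An empty cluster (0 OSDs) should not count as "all up".
    return stat["num_osds"] > 0 and stat["num_osds"] == stat["num_up_osds"]

# Sample payloads (illustrative, matching the documented field names):
mid_startup = '{"num_osds": 6, "num_up_osds": 4, "num_in_osds": 6}'
settled     = '{"num_osds": 6, "num_up_osds": 6, "num_in_osds": 6}'
print(all_osds_up(mid_startup))  # False -> the FAILED - RETRYING lines above
print(all_osds_up(settled))      # True  -> the task reports ok
```

In the run above the first poll fails (60 retries left) and the task succeeds about 12 seconds later, once all six OSDs on testbed-node-3/4/5 have joined.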
2025-08-29 15:00:15.503390 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:00:15.503393 | orchestrator | 2025-08-29 15:00:15.503397 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-08-29 15:00:15.503401 | orchestrator | Friday 29 August 2025 14:57:23 +0000 (0:00:12.839) 0:10:15.637 ********* 2025-08-29 15:00:15.503404 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503408 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.503412 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.503416 | orchestrator | 2025-08-29 15:00:15.503419 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 15:00:15.503423 | orchestrator | Friday 29 August 2025 14:57:24 +0000 (0:00:00.797) 0:10:16.434 ********* 2025-08-29 15:00:15.503427 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503431 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.503434 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.503438 | orchestrator | 2025-08-29 15:00:15.503442 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-08-29 15:00:15.503445 | orchestrator | Friday 29 August 2025 14:57:25 +0000 (0:00:00.468) 0:10:16.903 ********* 2025-08-29 15:00:15.503449 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.503453 | orchestrator | 2025-08-29 15:00:15.503457 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-08-29 15:00:15.503460 | orchestrator | Friday 29 August 2025 14:57:25 +0000 (0:00:00.471) 0:10:17.374 ********* 2025-08-29 15:00:15.503464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:15.503468 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-08-29 15:00:15.503472 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:15.503475 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503479 | orchestrator | 2025-08-29 15:00:15.503483 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-08-29 15:00:15.503491 | orchestrator | Friday 29 August 2025 14:57:25 +0000 (0:00:00.334) 0:10:17.709 ********* 2025-08-29 15:00:15.503495 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503498 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.503502 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.503506 | orchestrator | 2025-08-29 15:00:15.503509 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-08-29 15:00:15.503513 | orchestrator | Friday 29 August 2025 14:57:26 +0000 (0:00:00.267) 0:10:17.977 ********* 2025-08-29 15:00:15.503517 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503521 | orchestrator | 2025-08-29 15:00:15.503527 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-08-29 15:00:15.503531 | orchestrator | Friday 29 August 2025 14:57:26 +0000 (0:00:00.274) 0:10:18.252 ********* 2025-08-29 15:00:15.503535 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503538 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.503542 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.503546 | orchestrator | 2025-08-29 15:00:15.503549 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-08-29 15:00:15.503553 | orchestrator | Friday 29 August 2025 14:57:27 +0000 (0:00:00.520) 0:10:18.772 ********* 2025-08-29 15:00:15.503557 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503560 | orchestrator | 2025-08-29 15:00:15.503564 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-08-29 15:00:15.503568 | orchestrator | Friday 29 August 2025 14:57:27 +0000 (0:00:00.215) 0:10:18.988 ********* 2025-08-29 15:00:15.503572 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503576 | orchestrator | 2025-08-29 15:00:15.503579 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-08-29 15:00:15.503583 | orchestrator | Friday 29 August 2025 14:57:27 +0000 (0:00:00.193) 0:10:19.181 ********* 2025-08-29 15:00:15.503587 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503590 | orchestrator | 2025-08-29 15:00:15.503594 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-08-29 15:00:15.503598 | orchestrator | Friday 29 August 2025 14:57:27 +0000 (0:00:00.127) 0:10:19.309 ********* 2025-08-29 15:00:15.503602 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503606 | orchestrator | 2025-08-29 15:00:15.503609 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-08-29 15:00:15.503613 | orchestrator | Friday 29 August 2025 14:57:27 +0000 (0:00:00.205) 0:10:19.515 ********* 2025-08-29 15:00:15.503617 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503621 | orchestrator | 2025-08-29 15:00:15.503624 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-08-29 15:00:15.503631 | orchestrator | Friday 29 August 2025 14:57:27 +0000 (0:00:00.186) 0:10:19.701 ********* 2025-08-29 15:00:15.503635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:15.503639 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:15.503642 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:15.503646 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
15:00:15.503650 | orchestrator | 2025-08-29 15:00:15.503654 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-08-29 15:00:15.503657 | orchestrator | Friday 29 August 2025 14:57:28 +0000 (0:00:00.425) 0:10:20.127 ********* 2025-08-29 15:00:15.503661 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503665 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.503669 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.503672 | orchestrator | 2025-08-29 15:00:15.503676 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-08-29 15:00:15.503680 | orchestrator | Friday 29 August 2025 14:57:28 +0000 (0:00:00.291) 0:10:20.418 ********* 2025-08-29 15:00:15.503684 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503687 | orchestrator | 2025-08-29 15:00:15.503695 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-08-29 15:00:15.503699 | orchestrator | Friday 29 August 2025 14:57:29 +0000 (0:00:00.576) 0:10:20.995 ********* 2025-08-29 15:00:15.503702 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503706 | orchestrator | 2025-08-29 15:00:15.503710 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-08-29 15:00:15.503714 | orchestrator | 2025-08-29 15:00:15.503717 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 15:00:15.503721 | orchestrator | Friday 29 August 2025 14:57:29 +0000 (0:00:00.609) 0:10:21.604 ********* 2025-08-29 15:00:15.503725 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:15.503729 | orchestrator | 2025-08-29 15:00:15.503733 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-08-29 15:00:15.503737 | orchestrator | Friday 29 August 2025 14:57:30 +0000 (0:00:01.077) 0:10:22.681 ********* 2025-08-29 15:00:15.503740 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:15.503744 | orchestrator | 2025-08-29 15:00:15.503748 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 15:00:15.503752 | orchestrator | Friday 29 August 2025 14:57:32 +0000 (0:00:01.243) 0:10:23.925 ********* 2025-08-29 15:00:15.503755 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503759 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.503763 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.503767 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.503770 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.503774 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.503778 | orchestrator | 2025-08-29 15:00:15.503781 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 15:00:15.503785 | orchestrator | Friday 29 August 2025 14:57:33 +0000 (0:00:01.072) 0:10:24.998 ********* 2025-08-29 15:00:15.503789 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.503792 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.503796 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.503800 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.503804 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.503807 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.503811 | orchestrator | 2025-08-29 15:00:15.503815 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 15:00:15.503819 | orchestrator | Friday 29 
August 2025 14:57:33 +0000 (0:00:00.643) 0:10:25.642 ********* 2025-08-29 15:00:15.503822 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.503826 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.503830 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.503836 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.503840 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.503857 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.503861 | orchestrator | 2025-08-29 15:00:15.503865 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:00:15.503869 | orchestrator | Friday 29 August 2025 14:57:34 +0000 (0:00:00.813) 0:10:26.455 ********* 2025-08-29 15:00:15.503872 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.503876 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.503880 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.503884 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.503887 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.503891 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.503895 | orchestrator | 2025-08-29 15:00:15.503898 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:00:15.503902 | orchestrator | Friday 29 August 2025 14:57:35 +0000 (0:00:00.707) 0:10:27.163 ********* 2025-08-29 15:00:15.503910 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503914 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.503918 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.503921 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.503925 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.503929 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.503932 | orchestrator | 2025-08-29 15:00:15.503937 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-08-29 15:00:15.503940 | orchestrator | Friday 29 August 2025 14:57:36 +0000 (0:00:01.172) 0:10:28.335 ********* 2025-08-29 15:00:15.503944 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503948 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.503952 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.503955 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.503959 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.503963 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.503966 | orchestrator | 2025-08-29 15:00:15.503970 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:00:15.503976 | orchestrator | Friday 29 August 2025 14:57:37 +0000 (0:00:00.871) 0:10:29.207 ********* 2025-08-29 15:00:15.503980 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.503984 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.503988 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.503992 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.503995 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.503999 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.504003 | orchestrator | 2025-08-29 15:00:15.504007 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 15:00:15.504010 | orchestrator | Friday 29 August 2025 14:57:38 +0000 (0:00:00.582) 0:10:29.789 ********* 2025-08-29 15:00:15.504014 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.504018 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.504022 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.504025 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.504029 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.504032 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.504036 | 
orchestrator | 2025-08-29 15:00:15.504040 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:00:15.504044 | orchestrator | Friday 29 August 2025 14:57:39 +0000 (0:00:01.252) 0:10:31.042 ********* 2025-08-29 15:00:15.504048 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.504051 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.504055 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.504059 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.504063 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.504066 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.504070 | orchestrator | 2025-08-29 15:00:15.504074 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:00:15.504077 | orchestrator | Friday 29 August 2025 14:57:40 +0000 (0:00:01.045) 0:10:32.087 ********* 2025-08-29 15:00:15.504081 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.504085 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.504089 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.504092 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.504096 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.504100 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.504103 | orchestrator | 2025-08-29 15:00:15.504107 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:00:15.504111 | orchestrator | Friday 29 August 2025 14:57:41 +0000 (0:00:00.903) 0:10:32.991 ********* 2025-08-29 15:00:15.504115 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.504119 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.504122 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.504126 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.504133 | orchestrator | ok: [testbed-node-1] 2025-08-29 
15:00:15.504137 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.504141 | orchestrator | 2025-08-29 15:00:15.504145 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:00:15.504148 | orchestrator | Friday 29 August 2025 14:57:41 +0000 (0:00:00.575) 0:10:33.567 ********* 2025-08-29 15:00:15.504152 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.504156 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.504159 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.504163 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.504167 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.504171 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.504174 | orchestrator | 2025-08-29 15:00:15.504178 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:00:15.504182 | orchestrator | Friday 29 August 2025 14:57:42 +0000 (0:00:00.840) 0:10:34.407 ********* 2025-08-29 15:00:15.504186 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.504189 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.504193 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.504197 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.504201 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.504204 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.504208 | orchestrator | 2025-08-29 15:00:15.504212 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:00:15.504216 | orchestrator | Friday 29 August 2025 14:57:43 +0000 (0:00:00.603) 0:10:35.010 ********* 2025-08-29 15:00:15.504219 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.504223 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.504229 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.504233 | orchestrator | skipping: [testbed-node-0] 
2025-08-29 15:00:15.504237 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.504241 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.504245 | orchestrator | 2025-08-29 15:00:15.504248 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 15:00:15.504252 | orchestrator | Friday 29 August 2025 14:57:44 +0000 (0:00:00.891) 0:10:35.902 ********* 2025-08-29 15:00:15.504256 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.504260 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.504264 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.504267 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.504271 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.504275 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.504278 | orchestrator | 2025-08-29 15:00:15.504282 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:00:15.504286 | orchestrator | Friday 29 August 2025 14:57:44 +0000 (0:00:00.562) 0:10:36.464 ********* 2025-08-29 15:00:15.504290 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.504293 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.504297 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.504301 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:15.504304 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:15.504308 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:15.504312 | orchestrator | 2025-08-29 15:00:15.504315 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:00:15.504319 | orchestrator | Friday 29 August 2025 14:57:45 +0000 (0:00:00.718) 0:10:37.183 ********* 2025-08-29 15:00:15.504323 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.504326 | orchestrator | skipping: [testbed-node-4] 
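The "Check for a ... container" tasks above, whose results feed the `handler_*_status` facts, boil down to asking the container runtime whether a daemon container exists on the node. A sketch of that check, assuming docker and ceph-ansible's `ceph-<daemon>-<hostname>` naming convention (the real role templates the runtime binary and name pattern):

```shell
# True (exit 0) when a container named ceph-<daemon>-<host> is running.
# Assumption: docker runtime and ceph-ansible-style container names.
check_ceph_container() {
  local daemon="$1" host="$2"
  docker ps -q --filter "name=ceph-${daemon}-${host}" | grep -q .
}

# Usage:
#   if check_ceph_container osd testbed-node-3; then echo "osd running"; fi
```

This matches the pattern in the log: the mon/mgr checks return ok only on the control nodes (testbed-node-0/1/2) and the osd/mds/rgw checks only on the storage nodes (testbed-node-3/4/5).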
2025-08-29 15:00:15.504330 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.504334 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.504338 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.504341 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.504345 | orchestrator | 2025-08-29 15:00:15.504351 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:00:15.504358 | orchestrator | Friday 29 August 2025 14:57:45 +0000 (0:00:00.523) 0:10:37.706 ********* 2025-08-29 15:00:15.504362 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.504366 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.504370 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.504374 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.504377 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.504381 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.504385 | orchestrator | 2025-08-29 15:00:15.504389 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 15:00:15.504393 | orchestrator | Friday 29 August 2025 14:57:46 +0000 (0:00:00.669) 0:10:38.376 ********* 2025-08-29 15:00:15.504396 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.504400 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.504404 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.504407 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.504411 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.504415 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.504419 | orchestrator | 2025-08-29 15:00:15.504422 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-08-29 15:00:15.504426 | orchestrator | Friday 29 August 2025 14:57:47 +0000 (0:00:01.114) 0:10:39.490 ********* 2025-08-29 15:00:15.504430 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2025-08-29 15:00:15.504433 | orchestrator | 2025-08-29 15:00:15.504437 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-08-29 15:00:15.504441 | orchestrator | Friday 29 August 2025 14:57:51 +0000 (0:00:04.247) 0:10:43.738 ********* 2025-08-29 15:00:15.504445 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:00:15.504449 | orchestrator | 2025-08-29 15:00:15.504453 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-08-29 15:00:15.504457 | orchestrator | Friday 29 August 2025 14:57:54 +0000 (0:00:02.095) 0:10:45.834 ********* 2025-08-29 15:00:15.504460 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.504464 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.504468 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.504472 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.504475 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:15.504479 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:15.504483 | orchestrator | 2025-08-29 15:00:15.504486 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-08-29 15:00:15.504490 | orchestrator | Friday 29 August 2025 14:57:55 +0000 (0:00:01.436) 0:10:47.270 ********* 2025-08-29 15:00:15.504494 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.504498 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.504504 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.504510 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:15.504516 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:15.504523 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:15.504529 | orchestrator | 2025-08-29 15:00:15.504535 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
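The "Generate systemd unit file for ceph-crash container" task that follows templates out a unit that runs the crash-dump collector in a container on every node. A minimal illustrative sketch of such a unit (the unit name, image, and mounts are assumptions, not the exact template ceph-ansible renders):

```ini
# /etc/systemd/system/ceph-crash@.service -- illustrative sketch
[Unit]
Description=Ceph crash dump collector
After=network-online.target docker.service
Wants=network-online.target

[Service]
# Remove any stale container before starting a fresh one.
ExecStartPre=-/usr/bin/docker rm -f ceph-crash-%i
ExecStart=/usr/bin/docker run --rm --name ceph-crash-%i \
    -v /etc/ceph:/etc/ceph:z \
    -v /var/lib/ceph:/var/lib/ceph:z \
    --entrypoint=/usr/bin/ceph-crash \
    quay.io/ceph/daemon:latest
ExecStop=/usr/bin/docker stop ceph-crash-%i
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
```

The preceding tasks set up what the daemon needs: the `client.crash` keyring distributed to each node and the `/var/lib/ceph/crash/posted` directory where processed crash reports are moved.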
2025-08-29 15:00:15.504542 | orchestrator | Friday 29 August 2025 14:57:56 +0000 (0:00:01.263) 0:10:48.534 ********* 2025-08-29 15:00:15.504548 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-2, testbed-node-1 2025-08-29 15:00:15.504554 | orchestrator | 2025-08-29 15:00:15.504561 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-08-29 15:00:15.504567 | orchestrator | Friday 29 August 2025 14:57:57 +0000 (0:00:01.159) 0:10:49.693 ********* 2025-08-29 15:00:15.504574 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.504579 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.504582 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.504586 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:15.504590 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:15.504607 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:15.504611 | orchestrator | 2025-08-29 15:00:15.504615 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-08-29 15:00:15.504619 | orchestrator | Friday 29 August 2025 14:57:59 +0000 (0:00:01.461) 0:10:51.155 ********* 2025-08-29 15:00:15.504626 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.504630 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.504634 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.504638 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:15.504642 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:15.504645 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:15.504649 | orchestrator | 2025-08-29 15:00:15.504653 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-08-29 15:00:15.504657 | orchestrator | Friday 29 August 2025 14:58:02 +0000 (0:00:03.452) 
0:10:54.607 ********* 2025-08-29 15:00:15.504661 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:15.504664 | orchestrator | 2025-08-29 15:00:15.504669 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-08-29 15:00:15.504673 | orchestrator | Friday 29 August 2025 14:58:03 +0000 (0:00:01.087) 0:10:55.695 ********* 2025-08-29 15:00:15.504676 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.504680 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.504684 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.504688 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.504691 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.504695 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.504699 | orchestrator | 2025-08-29 15:00:15.504702 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-08-29 15:00:15.504706 | orchestrator | Friday 29 August 2025 14:58:04 +0000 (0:00:00.510) 0:10:56.205 ********* 2025-08-29 15:00:15.504710 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.504714 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.504717 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.504721 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:15.504725 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:15.504729 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:15.504732 | orchestrator | 2025-08-29 15:00:15.504739 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-08-29 15:00:15.504743 | orchestrator | Friday 29 August 2025 14:58:07 +0000 (0:00:02.959) 0:10:59.164 ********* 2025-08-29 15:00:15.504747 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.504751 | 
orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.504754 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.504758 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:15.504762 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:15.504766 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:15.504769 | orchestrator | 2025-08-29 15:00:15.504773 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-08-29 15:00:15.504777 | orchestrator | 2025-08-29 15:00:15.504781 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 15:00:15.504785 | orchestrator | Friday 29 August 2025 14:58:08 +0000 (0:00:00.876) 0:11:00.040 ********* 2025-08-29 15:00:15.504789 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.504792 | orchestrator | 2025-08-29 15:00:15.504796 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 15:00:15.504800 | orchestrator | Friday 29 August 2025 14:58:09 +0000 (0:00:00.852) 0:11:00.892 ********* 2025-08-29 15:00:15.504804 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.504808 | orchestrator | 2025-08-29 15:00:15.504812 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 15:00:15.504818 | orchestrator | Friday 29 August 2025 14:58:09 +0000 (0:00:00.591) 0:11:01.484 ********* 2025-08-29 15:00:15.504822 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.504826 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.504830 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.504834 | orchestrator | 2025-08-29 15:00:15.504838 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2025-08-29 15:00:15.504841 | orchestrator | Friday 29 August 2025 14:58:10 +0000 (0:00:00.742) 0:11:02.226 ********* 2025-08-29 15:00:15.504873 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.504878 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.504881 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.504885 | orchestrator | 2025-08-29 15:00:15.504889 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 15:00:15.504892 | orchestrator | Friday 29 August 2025 14:58:11 +0000 (0:00:00.818) 0:11:03.045 ********* 2025-08-29 15:00:15.504896 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.504900 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.504904 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.504907 | orchestrator | 2025-08-29 15:00:15.504911 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:00:15.504915 | orchestrator | Friday 29 August 2025 14:58:12 +0000 (0:00:00.810) 0:11:03.856 ********* 2025-08-29 15:00:15.504918 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.504922 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.504926 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.504930 | orchestrator | 2025-08-29 15:00:15.504933 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:00:15.504937 | orchestrator | Friday 29 August 2025 14:58:12 +0000 (0:00:00.889) 0:11:04.745 ********* 2025-08-29 15:00:15.504941 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.504945 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.504948 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.504952 | orchestrator | 2025-08-29 15:00:15.504956 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 
15:00:15.504960 | orchestrator | Friday 29 August 2025 14:58:13 +0000 (0:00:00.630) 0:11:05.376 ********* 2025-08-29 15:00:15.504963 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.504967 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.504971 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.504974 | orchestrator | 2025-08-29 15:00:15.504978 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:00:15.504982 | orchestrator | Friday 29 August 2025 14:58:13 +0000 (0:00:00.384) 0:11:05.760 ********* 2025-08-29 15:00:15.504988 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.504992 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.504996 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.504999 | orchestrator | 2025-08-29 15:00:15.505003 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 15:00:15.505007 | orchestrator | Friday 29 August 2025 14:58:14 +0000 (0:00:00.398) 0:11:06.159 ********* 2025-08-29 15:00:15.505011 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.505014 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.505018 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.505022 | orchestrator | 2025-08-29 15:00:15.505026 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:00:15.505029 | orchestrator | Friday 29 August 2025 14:58:15 +0000 (0:00:00.726) 0:11:06.885 ********* 2025-08-29 15:00:15.505033 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.505037 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.505040 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.505044 | orchestrator | 2025-08-29 15:00:15.505048 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:00:15.505052 | orchestrator | Friday 
29 August 2025 14:58:16 +0000 (0:00:01.276) 0:11:08.161 ********* 2025-08-29 15:00:15.505059 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.505062 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.505066 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.505070 | orchestrator | 2025-08-29 15:00:15.505074 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:00:15.505078 | orchestrator | Friday 29 August 2025 14:58:16 +0000 (0:00:00.404) 0:11:08.565 ********* 2025-08-29 15:00:15.505081 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.505085 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.505089 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.505093 | orchestrator | 2025-08-29 15:00:15.505097 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:00:15.505103 | orchestrator | Friday 29 August 2025 14:58:17 +0000 (0:00:00.404) 0:11:08.970 ********* 2025-08-29 15:00:15.505107 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.505111 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.505115 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.505119 | orchestrator | 2025-08-29 15:00:15.505122 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:00:15.505126 | orchestrator | Friday 29 August 2025 14:58:17 +0000 (0:00:00.463) 0:11:09.433 ********* 2025-08-29 15:00:15.505130 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.505133 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.505137 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.505141 | orchestrator | 2025-08-29 15:00:15.505145 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:00:15.505148 | orchestrator | Friday 29 August 2025 14:58:18 +0000 
(0:00:00.810) 0:11:10.244 ********* 2025-08-29 15:00:15.505152 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.505156 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.505159 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.505163 | orchestrator | 2025-08-29 15:00:15.505167 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 15:00:15.505171 | orchestrator | Friday 29 August 2025 14:58:18 +0000 (0:00:00.398) 0:11:10.643 ********* 2025-08-29 15:00:15.505174 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.505178 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.505182 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.505186 | orchestrator | 2025-08-29 15:00:15.505189 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:00:15.505193 | orchestrator | Friday 29 August 2025 14:58:19 +0000 (0:00:00.495) 0:11:11.139 ********* 2025-08-29 15:00:15.505197 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.505201 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.505204 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.505208 | orchestrator | 2025-08-29 15:00:15.505212 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:00:15.505215 | orchestrator | Friday 29 August 2025 14:58:19 +0000 (0:00:00.444) 0:11:11.583 ********* 2025-08-29 15:00:15.505219 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.505223 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.505227 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.505230 | orchestrator | 2025-08-29 15:00:15.505234 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:00:15.505238 | orchestrator | Friday 29 August 2025 14:58:20 +0000 (0:00:00.973) 
0:11:12.557 ********* 2025-08-29 15:00:15.505242 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.505245 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.505249 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.505253 | orchestrator | 2025-08-29 15:00:15.505257 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 15:00:15.505260 | orchestrator | Friday 29 August 2025 14:58:21 +0000 (0:00:00.453) 0:11:13.010 ********* 2025-08-29 15:00:15.505264 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.505271 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.505275 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.505279 | orchestrator | 2025-08-29 15:00:15.505282 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-08-29 15:00:15.505286 | orchestrator | Friday 29 August 2025 14:58:21 +0000 (0:00:00.733) 0:11:13.744 ********* 2025-08-29 15:00:15.505290 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.505294 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.505297 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-08-29 15:00:15.505301 | orchestrator | 2025-08-29 15:00:15.505305 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-08-29 15:00:15.505309 | orchestrator | Friday 29 August 2025 14:58:22 +0000 (0:00:00.791) 0:11:14.536 ********* 2025-08-29 15:00:15.505312 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:00:15.505316 | orchestrator | 2025-08-29 15:00:15.505320 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-08-29 15:00:15.505323 | orchestrator | Friday 29 August 2025 14:58:25 +0000 (0:00:02.428) 0:11:16.964 ********* 2025-08-29 15:00:15.505330 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-08-29 15:00:15.505336 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.505339 | orchestrator | 2025-08-29 15:00:15.505343 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-08-29 15:00:15.505347 | orchestrator | Friday 29 August 2025 14:58:25 +0000 (0:00:00.252) 0:11:17.216 ********* 2025-08-29 15:00:15.505352 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:00:15.505358 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:00:15.505361 | orchestrator | 2025-08-29 15:00:15.505365 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-08-29 15:00:15.505369 | orchestrator | Friday 29 August 2025 14:58:34 +0000 (0:00:08.633) 0:11:25.850 ********* 2025-08-29 15:00:15.505373 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:00:15.505376 | orchestrator | 2025-08-29 15:00:15.505380 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-08-29 15:00:15.505386 | orchestrator | Friday 29 August 2025 14:58:37 +0000 (0:00:03.539) 0:11:29.389 ********* 2025-08-29 15:00:15.505390 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-08-29 15:00:15.505394 | orchestrator | 2025-08-29 15:00:15.505397 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-08-29 15:00:15.505401 | orchestrator | Friday 29 August 2025 14:58:38 +0000 (0:00:00.591) 0:11:29.980 ********* 2025-08-29 15:00:15.505405 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-08-29 15:00:15.505409 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-08-29 15:00:15.505412 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-08-29 15:00:15.505416 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-08-29 15:00:15.505420 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-08-29 15:00:15.505424 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-08-29 15:00:15.505430 | orchestrator | 2025-08-29 15:00:15.505434 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-08-29 15:00:15.505438 | orchestrator | Friday 29 August 2025 14:58:39 +0000 (0:00:01.034) 0:11:31.015 ********* 2025-08-29 15:00:15.505442 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:15.505445 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:00:15.505449 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:00:15.505453 | orchestrator | 2025-08-29 15:00:15.505457 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-08-29 15:00:15.505460 | orchestrator | Friday 29 August 2025 14:58:41 +0000 (0:00:02.231) 0:11:33.246 ********* 2025-08-29 15:00:15.505464 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:00:15.505468 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2025-08-29 15:00:15.505472 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.505475 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 15:00:15.505479 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 15:00:15.505483 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.505487 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:00:15.505490 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 15:00:15.505494 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.505498 | orchestrator | 2025-08-29 15:00:15.505502 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-08-29 15:00:15.505505 | orchestrator | Friday 29 August 2025 14:58:42 +0000 (0:00:01.176) 0:11:34.423 ********* 2025-08-29 15:00:15.505509 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.505513 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.505516 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.505520 | orchestrator | 2025-08-29 15:00:15.505524 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-08-29 15:00:15.505528 | orchestrator | Friday 29 August 2025 14:58:45 +0000 (0:00:02.716) 0:11:37.139 ********* 2025-08-29 15:00:15.505531 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.505535 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.505539 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.505543 | orchestrator | 2025-08-29 15:00:15.505546 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-08-29 15:00:15.505550 | orchestrator | Friday 29 August 2025 14:58:46 +0000 (0:00:00.765) 0:11:37.904 ********* 2025-08-29 15:00:15.505554 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-08-29 15:00:15.505558 | orchestrator | 2025-08-29 15:00:15.505562 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-08-29 15:00:15.505565 | orchestrator | Friday 29 August 2025 14:58:46 +0000 (0:00:00.534) 0:11:38.439 ********* 2025-08-29 15:00:15.505572 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.505576 | orchestrator | 2025-08-29 15:00:15.505580 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-08-29 15:00:15.505584 | orchestrator | Friday 29 August 2025 14:58:47 +0000 (0:00:00.828) 0:11:39.267 ********* 2025-08-29 15:00:15.505587 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.505591 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.505595 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.505598 | orchestrator | 2025-08-29 15:00:15.505602 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-08-29 15:00:15.505606 | orchestrator | Friday 29 August 2025 14:58:48 +0000 (0:00:01.322) 0:11:40.590 ********* 2025-08-29 15:00:15.505610 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.505614 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.505617 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.505621 | orchestrator | 2025-08-29 15:00:15.505628 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-08-29 15:00:15.505632 | orchestrator | Friday 29 August 2025 14:58:50 +0000 (0:00:01.266) 0:11:41.856 ********* 2025-08-29 15:00:15.505635 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.505639 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.505643 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.505646 | orchestrator | 2025-08-29 
15:00:15.505650 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-08-29 15:00:15.505654 | orchestrator | Friday 29 August 2025 14:58:52 +0000 (0:00:01.932) 0:11:43.788 ********* 2025-08-29 15:00:15.505658 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.505661 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.505665 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.505669 | orchestrator | 2025-08-29 15:00:15.505673 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-08-29 15:00:15.505676 | orchestrator | Friday 29 August 2025 14:58:54 +0000 (0:00:02.185) 0:11:45.974 ********* 2025-08-29 15:00:15.505682 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.505686 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.505690 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.505693 | orchestrator | 2025-08-29 15:00:15.505697 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 15:00:15.505701 | orchestrator | Friday 29 August 2025 14:58:55 +0000 (0:00:01.284) 0:11:47.259 ********* 2025-08-29 15:00:15.505705 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.505709 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.505713 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.505716 | orchestrator | 2025-08-29 15:00:15.505720 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-08-29 15:00:15.505724 | orchestrator | Friday 29 August 2025 14:58:56 +0000 (0:00:01.016) 0:11:48.275 ********* 2025-08-29 15:00:15.505727 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.505731 | orchestrator | 2025-08-29 15:00:15.505735 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2025-08-29 15:00:15.505739 | orchestrator | Friday 29 August 2025 14:58:57 +0000 (0:00:00.558) 0:11:48.833 ********* 2025-08-29 15:00:15.505743 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.505746 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.505750 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.505754 | orchestrator | 2025-08-29 15:00:15.505758 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-08-29 15:00:15.505762 | orchestrator | Friday 29 August 2025 14:58:57 +0000 (0:00:00.374) 0:11:49.208 ********* 2025-08-29 15:00:15.505765 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.505769 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.505773 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.505776 | orchestrator | 2025-08-29 15:00:15.505780 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-08-29 15:00:15.505784 | orchestrator | Friday 29 August 2025 14:58:59 +0000 (0:00:01.566) 0:11:50.775 ********* 2025-08-29 15:00:15.505788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:15.505792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:15.505796 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:15.505799 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.505803 | orchestrator | 2025-08-29 15:00:15.505807 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-08-29 15:00:15.505811 | orchestrator | Friday 29 August 2025 14:58:59 +0000 (0:00:00.630) 0:11:51.405 ********* 2025-08-29 15:00:15.505815 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.505818 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.505822 | orchestrator | ok: [testbed-node-5] 2025-08-29 
15:00:15.505826 | orchestrator | 2025-08-29 15:00:15.505832 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-08-29 15:00:15.505836 | orchestrator | 2025-08-29 15:00:15.505840 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 15:00:15.505857 | orchestrator | Friday 29 August 2025 14:59:00 +0000 (0:00:00.667) 0:11:52.073 ********* 2025-08-29 15:00:15.505861 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.505865 | orchestrator | 2025-08-29 15:00:15.505868 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 15:00:15.505872 | orchestrator | Friday 29 August 2025 14:59:01 +0000 (0:00:00.847) 0:11:52.920 ********* 2025-08-29 15:00:15.505876 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.505880 | orchestrator | 2025-08-29 15:00:15.505884 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 15:00:15.505887 | orchestrator | Friday 29 August 2025 14:59:01 +0000 (0:00:00.524) 0:11:53.445 ********* 2025-08-29 15:00:15.505891 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.505895 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.505901 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.505905 | orchestrator | 2025-08-29 15:00:15.505908 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 15:00:15.505912 | orchestrator | Friday 29 August 2025 14:59:02 +0000 (0:00:00.636) 0:11:54.081 ********* 2025-08-29 15:00:15.505916 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.505920 | orchestrator | ok: [testbed-node-4] 2025-08-29 
15:00:15.505923 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.505927 | orchestrator | 2025-08-29 15:00:15.505931 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 15:00:15.505934 | orchestrator | Friday 29 August 2025 14:59:03 +0000 (0:00:00.735) 0:11:54.817 ********* 2025-08-29 15:00:15.505938 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.505942 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.505946 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.505949 | orchestrator | 2025-08-29 15:00:15.505953 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:00:15.505957 | orchestrator | Friday 29 August 2025 14:59:03 +0000 (0:00:00.801) 0:11:55.618 ********* 2025-08-29 15:00:15.505960 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.505964 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.505968 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.505972 | orchestrator | 2025-08-29 15:00:15.505975 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:00:15.505979 | orchestrator | Friday 29 August 2025 14:59:04 +0000 (0:00:00.824) 0:11:56.442 ********* 2025-08-29 15:00:15.505983 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.505987 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.505991 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.505994 | orchestrator | 2025-08-29 15:00:15.505998 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 15:00:15.506002 | orchestrator | Friday 29 August 2025 14:59:05 +0000 (0:00:00.754) 0:11:57.197 ********* 2025-08-29 15:00:15.506006 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.506010 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.506036 | orchestrator | skipping: 
[testbed-node-5] 2025-08-29 15:00:15.506041 | orchestrator | 2025-08-29 15:00:15.506044 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:00:15.506048 | orchestrator | Friday 29 August 2025 14:59:05 +0000 (0:00:00.362) 0:11:57.560 ********* 2025-08-29 15:00:15.506052 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.506056 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.506059 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.506063 | orchestrator | 2025-08-29 15:00:15.506067 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 15:00:15.506074 | orchestrator | Friday 29 August 2025 14:59:06 +0000 (0:00:00.357) 0:11:57.917 ********* 2025-08-29 15:00:15.506077 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.506081 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.506085 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.506089 | orchestrator | 2025-08-29 15:00:15.506092 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:00:15.506096 | orchestrator | Friday 29 August 2025 14:59:06 +0000 (0:00:00.743) 0:11:58.661 ********* 2025-08-29 15:00:15.506100 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.506104 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.506108 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.506111 | orchestrator | 2025-08-29 15:00:15.506115 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:00:15.506119 | orchestrator | Friday 29 August 2025 14:59:08 +0000 (0:00:01.106) 0:11:59.767 ********* 2025-08-29 15:00:15.506123 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.506127 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.506130 | orchestrator | skipping: [testbed-node-5] 2025-08-29 
15:00:15.506134 | orchestrator | 2025-08-29 15:00:15.506138 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:00:15.506142 | orchestrator | Friday 29 August 2025 14:59:08 +0000 (0:00:00.460) 0:12:00.228 ********* 2025-08-29 15:00:15.506145 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.506149 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.506153 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.506157 | orchestrator | 2025-08-29 15:00:15.506160 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:00:15.506164 | orchestrator | Friday 29 August 2025 14:59:08 +0000 (0:00:00.344) 0:12:00.573 ********* 2025-08-29 15:00:15.506168 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.506172 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.506176 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.506179 | orchestrator | 2025-08-29 15:00:15.506183 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:00:15.506187 | orchestrator | Friday 29 August 2025 14:59:09 +0000 (0:00:00.379) 0:12:00.952 ********* 2025-08-29 15:00:15.506191 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.506195 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.506198 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.506202 | orchestrator | 2025-08-29 15:00:15.506206 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:00:15.506210 | orchestrator | Friday 29 August 2025 14:59:09 +0000 (0:00:00.632) 0:12:01.585 ********* 2025-08-29 15:00:15.506213 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.506217 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.506221 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.506225 | orchestrator | 2025-08-29 
15:00:15.506229 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 15:00:15.506232 | orchestrator | Friday 29 August 2025 14:59:10 +0000 (0:00:00.399) 0:12:01.984 ********* 2025-08-29 15:00:15.506236 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.506240 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.506244 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.506247 | orchestrator | 2025-08-29 15:00:15.506251 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:00:15.506255 | orchestrator | Friday 29 August 2025 14:59:10 +0000 (0:00:00.328) 0:12:02.313 ********* 2025-08-29 15:00:15.506259 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.506263 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.506266 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.506270 | orchestrator | 2025-08-29 15:00:15.506276 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:00:15.506280 | orchestrator | Friday 29 August 2025 14:59:10 +0000 (0:00:00.306) 0:12:02.620 ********* 2025-08-29 15:00:15.506287 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.506291 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.506295 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.506299 | orchestrator | 2025-08-29 15:00:15.506302 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:00:15.506306 | orchestrator | Friday 29 August 2025 14:59:11 +0000 (0:00:00.609) 0:12:03.229 ********* 2025-08-29 15:00:15.506311 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.506317 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.506323 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.506329 | orchestrator | 2025-08-29 15:00:15.506335 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 15:00:15.506341 | orchestrator | Friday 29 August 2025 14:59:11 +0000 (0:00:00.375) 0:12:03.605 ********* 2025-08-29 15:00:15.506346 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.506352 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.506358 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.506364 | orchestrator | 2025-08-29 15:00:15.506369 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-08-29 15:00:15.506374 | orchestrator | Friday 29 August 2025 14:59:12 +0000 (0:00:00.657) 0:12:04.263 ********* 2025-08-29 15:00:15.506379 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.506385 | orchestrator | 2025-08-29 15:00:15.506390 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-08-29 15:00:15.506397 | orchestrator | Friday 29 August 2025 14:59:13 +0000 (0:00:00.926) 0:12:05.190 ********* 2025-08-29 15:00:15.506403 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:15.506409 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:00:15.506417 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:00:15.506424 | orchestrator | 2025-08-29 15:00:15.506429 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-08-29 15:00:15.506435 | orchestrator | Friday 29 August 2025 14:59:15 +0000 (0:00:02.189) 0:12:07.380 ********* 2025-08-29 15:00:15.506441 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:00:15.506447 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:00:15.506452 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.506459 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2025-08-29 15:00:15.506464 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 15:00:15.506470 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.506475 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:00:15.506481 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 15:00:15.506487 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.506493 | orchestrator | 2025-08-29 15:00:15.506498 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-08-29 15:00:15.506505 | orchestrator | Friday 29 August 2025 14:59:16 +0000 (0:00:01.236) 0:12:08.617 ********* 2025-08-29 15:00:15.506515 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.506522 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.506528 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.506533 | orchestrator | 2025-08-29 15:00:15.506539 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-08-29 15:00:15.506545 | orchestrator | Friday 29 August 2025 14:59:17 +0000 (0:00:00.334) 0:12:08.951 ********* 2025-08-29 15:00:15.506551 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.506557 | orchestrator | 2025-08-29 15:00:15.506562 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-08-29 15:00:15.506568 | orchestrator | Friday 29 August 2025 14:59:17 +0000 (0:00:00.793) 0:12:09.744 ********* 2025-08-29 15:00:15.506579 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:15.506586 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:15.506591 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:15.506597 | orchestrator | 2025-08-29 15:00:15.506603 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-08-29 15:00:15.506609 | orchestrator | Friday 29 August 2025 14:59:18 +0000 (0:00:00.861) 0:12:10.606 ********* 2025-08-29 15:00:15.506615 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:15.506621 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-08-29 15:00:15.506627 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:15.506633 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-08-29 15:00:15.506639 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:15.506644 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-08-29 15:00:15.506650 | orchestrator | 2025-08-29 15:00:15.506663 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-08-29 15:00:15.506670 | orchestrator | Friday 29 August 2025 14:59:23 +0000 (0:00:04.883) 0:12:15.490 ********* 2025-08-29 15:00:15.506676 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:15.506682 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:00:15.506688 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:15.506694 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:00:15.506697 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:00:15.506701 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:00:15.506705 | orchestrator | 2025-08-29 15:00:15.506708 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-08-29 15:00:15.506712 | orchestrator | Friday 29 August 2025 14:59:26 +0000 (0:00:03.205) 0:12:18.695 ********* 2025-08-29 15:00:15.506716 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:00:15.506719 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.506723 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 15:00:15.506727 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.506731 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:00:15.506734 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.506738 | orchestrator | 2025-08-29 15:00:15.506742 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-08-29 15:00:15.506746 | orchestrator | Friday 29 August 2025 14:59:28 +0000 (0:00:01.251) 0:12:19.947 ********* 2025-08-29 15:00:15.506749 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-08-29 15:00:15.506753 | orchestrator | 2025-08-29 15:00:15.506757 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-08-29 15:00:15.506765 | orchestrator | Friday 29 August 2025 14:59:28 +0000 (0:00:00.295) 0:12:20.243 ********* 2025-08-29 15:00:15.506769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2025-08-29 15:00:15.506774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:15.506782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:15.506786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:15.506789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:15.506793 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.506797 | orchestrator | 2025-08-29 15:00:15.506801 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-08-29 15:00:15.506805 | orchestrator | Friday 29 August 2025 14:59:29 +0000 (0:00:00.664) 0:12:20.908 ********* 2025-08-29 15:00:15.506808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:15.506812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:15.506816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:15.506820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:15.506824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:00:15.506827 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
15:00:15.506831 | orchestrator | 2025-08-29 15:00:15.506835 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-08-29 15:00:15.506839 | orchestrator | Friday 29 August 2025 14:59:29 +0000 (0:00:00.586) 0:12:21.494 ********* 2025-08-29 15:00:15.506876 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:00:15.506881 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:00:15.506885 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:00:15.506889 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:00:15.506893 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:00:15.506897 | orchestrator | 2025-08-29 15:00:15.506900 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-08-29 15:00:15.506907 | orchestrator | Friday 29 August 2025 15:00:01 +0000 (0:00:31.724) 0:12:53.218 ********* 2025-08-29 15:00:15.506911 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.506915 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.506919 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.506922 | orchestrator | 2025-08-29 15:00:15.506926 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-08-29 15:00:15.506930 | orchestrator | 
Friday 29 August 2025 15:00:01 +0000 (0:00:00.409) 0:12:53.628 ********* 2025-08-29 15:00:15.506934 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.506937 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.506941 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.506945 | orchestrator | 2025-08-29 15:00:15.506949 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-08-29 15:00:15.506956 | orchestrator | Friday 29 August 2025 15:00:02 +0000 (0:00:00.670) 0:12:54.298 ********* 2025-08-29 15:00:15.506960 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.506964 | orchestrator | 2025-08-29 15:00:15.506967 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-08-29 15:00:15.506971 | orchestrator | Friday 29 August 2025 15:00:03 +0000 (0:00:00.602) 0:12:54.900 ********* 2025-08-29 15:00:15.506975 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.506979 | orchestrator | 2025-08-29 15:00:15.506983 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-08-29 15:00:15.506989 | orchestrator | Friday 29 August 2025 15:00:04 +0000 (0:00:01.005) 0:12:55.906 ********* 2025-08-29 15:00:15.506993 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.506997 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.507001 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.507004 | orchestrator | 2025-08-29 15:00:15.507011 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-08-29 15:00:15.507015 | orchestrator | Friday 29 August 2025 15:00:05 +0000 (0:00:01.349) 0:12:57.256 ********* 2025-08-29 15:00:15.507019 | orchestrator | changed: 
[testbed-node-3] 2025-08-29 15:00:15.507022 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.507026 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.507030 | orchestrator | 2025-08-29 15:00:15.507034 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-08-29 15:00:15.507037 | orchestrator | Friday 29 August 2025 15:00:06 +0000 (0:00:01.199) 0:12:58.456 ********* 2025-08-29 15:00:15.507041 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:00:15.507045 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:00:15.507049 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:00:15.507052 | orchestrator | 2025-08-29 15:00:15.507056 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-08-29 15:00:15.507060 | orchestrator | Friday 29 August 2025 15:00:08 +0000 (0:00:01.822) 0:13:00.278 ********* 2025-08-29 15:00:15.507064 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:15.507068 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:15.507071 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 15:00:15.507075 | orchestrator | 2025-08-29 15:00:15.507079 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 15:00:15.507083 | orchestrator | Friday 29 August 2025 15:00:11 +0000 (0:00:02.647) 0:13:02.925 ********* 2025-08-29 15:00:15.507086 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.507090 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.507094 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.507098 | orchestrator 
| 2025-08-29 15:00:15.507101 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-08-29 15:00:15.507105 | orchestrator | Friday 29 August 2025 15:00:11 +0000 (0:00:00.333) 0:13:03.259 ********* 2025-08-29 15:00:15.507109 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:00:15.507113 | orchestrator | 2025-08-29 15:00:15.507116 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-08-29 15:00:15.507120 | orchestrator | Friday 29 August 2025 15:00:12 +0000 (0:00:00.833) 0:13:04.092 ********* 2025-08-29 15:00:15.507124 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:00:15.507128 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:00:15.507132 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:00:15.507135 | orchestrator | 2025-08-29 15:00:15.507142 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-08-29 15:00:15.507146 | orchestrator | Friday 29 August 2025 15:00:12 +0000 (0:00:00.353) 0:13:04.445 ********* 2025-08-29 15:00:15.507149 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:00:15.507153 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:00:15.507157 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:00:15.507161 | orchestrator | 2025-08-29 15:00:15.507165 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-08-29 15:00:15.507168 | orchestrator | Friday 29 August 2025 15:00:13 +0000 (0:00:00.337) 0:13:04.783 ********* 2025-08-29 15:00:15.507172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:00:15.507176 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:00:15.507180 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:00:15.507183 | orchestrator 
| skipping: [testbed-node-3]
2025-08-29 15:00:15.507187 | orchestrator |
2025-08-29 15:00:15.507191 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-08-29 15:00:15.507195 | orchestrator | Friday 29 August 2025 15:00:14 +0000 (0:00:01.131) 0:13:05.914 *********
2025-08-29 15:00:15.507201 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:00:15.507206 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:00:15.507210 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:00:15.507214 | orchestrator |
2025-08-29 15:00:15.507218 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:00:15.507222 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2025-08-29 15:00:15.507226 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2025-08-29 15:00:15.507230 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2025-08-29 15:00:15.507235 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2025-08-29 15:00:15.507239 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2025-08-29 15:00:15.507243 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2025-08-29 15:00:15.507247 | orchestrator |
2025-08-29 15:00:15.507251 | orchestrator |
2025-08-29 15:00:15.507255 | orchestrator |
2025-08-29 15:00:15.507259 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:00:15.507264 | orchestrator | Friday 29 August 2025 15:00:14 +0000 (0:00:00.262) 0:13:06.177 *********
2025-08-29 15:00:15.507270 | orchestrator | ===============================================================================
2025-08-29 15:00:15.507275 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------ 141.52s
2025-08-29 15:00:15.507279 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.52s
2025-08-29 15:00:15.507283 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.72s
2025-08-29 15:00:15.507287 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 25.07s
2025-08-29 15:00:15.507291 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.03s
2025-08-29 15:00:15.507296 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 16.42s
2025-08-29 15:00:15.507300 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.84s
2025-08-29 15:00:15.507304 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.49s
2025-08-29 15:00:15.507308 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.41s
2025-08-29 15:00:15.507315 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.63s
2025-08-29 15:00:15.507319 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.78s
2025-08-29 15:00:15.507323 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.43s
2025-08-29 15:00:15.507327 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.93s
2025-08-29 15:00:15.507331 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.88s
2025-08-29 15:00:15.507335 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.25s
2025-08-29 15:00:15.507339 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.01s 2025-08-29
15:00:15.507344 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.78s
2025-08-29 15:00:15.507348 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.74s
2025-08-29 15:00:15.507352 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.60s
2025-08-29 15:00:15.507356 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.56s
2025-08-29 15:00:15.507360 | orchestrator | 2025-08-29 15:00:15 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:00:18.552262 | orchestrator | 2025-08-29 15:00:18 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED
2025-08-29 15:00:18.553889 | orchestrator | 2025-08-29 15:00:18 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED
2025-08-29 15:00:18.555764 | orchestrator | 2025-08-29 15:00:18 | INFO  | Task 7418b614-4c70-4ddc-a186-57eb91a9e2be is in state STARTED
2025-08-29 15:00:18.555865 | orchestrator | 2025-08-29 15:00:18 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:00:21.603517 | orchestrator | 2025-08-29 15:00:21 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED
2025-08-29 15:00:21.606376 | orchestrator | 2025-08-29 15:00:21 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED
2025-08-29 15:00:21.608985 | orchestrator | 2025-08-29 15:00:21 | INFO  | Task 7418b614-4c70-4ddc-a186-57eb91a9e2be is in state STARTED
2025-08-29 15:00:21.609027 | orchestrator | 2025-08-29 15:00:21 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:00:24.654635 | orchestrator | 2025-08-29 15:00:24 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED
2025-08-29 15:00:24.654742 | orchestrator | 2025-08-29 15:00:24 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED
2025-08-29 15:00:24.655906 | orchestrator | 2025-08-29 15:00:24 | INFO  | Task 7418b614-4c70-4ddc-a186-57eb91a9e2be is in state STARTED
2025-08-29 15:00:24.655940 | orchestrator | 2025-08-29 15:00:24 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:00:27.710798 | orchestrator | 2025-08-29 15:00:27 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED
2025-08-29 15:00:27.711986 | orchestrator | 2025-08-29 15:00:27 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED
2025-08-29 15:00:27.714281 | orchestrator | 2025-08-29 15:00:27 | INFO  | Task 7418b614-4c70-4ddc-a186-57eb91a9e2be is in state STARTED
2025-08-29 15:00:27.714347 | orchestrator | 2025-08-29 15:00:27 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:00:30.765090 | orchestrator | 2025-08-29 15:00:30 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED
2025-08-29 15:00:30.767427 | orchestrator | 2025-08-29 15:00:30 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED
2025-08-29 15:00:30.769986 | orchestrator | 2025-08-29 15:00:30 | INFO  | Task 7418b614-4c70-4ddc-a186-57eb91a9e2be is in state STARTED
2025-08-29 15:00:30.770309 | orchestrator | 2025-08-29 15:00:30 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:00:33.831198 | orchestrator | 2025-08-29 15:00:33 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED
2025-08-29 15:00:33.834081 | orchestrator | 2025-08-29 15:00:33 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED
2025-08-29 15:00:33.836636 | orchestrator | 2025-08-29 15:00:33 | INFO  | Task 7418b614-4c70-4ddc-a186-57eb91a9e2be is in state STARTED
2025-08-29 15:00:33.836691 | orchestrator | 2025-08-29 15:00:33 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:00:36.882172 | orchestrator | 2025-08-29 15:00:36 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED
2025-08-29 15:00:36.882296 | orchestrator | 2025-08-29 15:00:36 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED
2025-08-29 15:00:36.883377 | orchestrator | 2025-08-29 15:00:36 | INFO  | Task 7418b614-4c70-4ddc-a186-57eb91a9e2be is in state STARTED
2025-08-29 15:00:36.883457 | orchestrator | 2025-08-29 15:00:36 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:00:39.936500 | orchestrator | 2025-08-29 15:00:39 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED
2025-08-29 15:00:39.937490 | orchestrator | 2025-08-29 15:00:39 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED
2025-08-29 15:00:39.939175 | orchestrator | 2025-08-29 15:00:39 | INFO  | Task 7418b614-4c70-4ddc-a186-57eb91a9e2be is in state SUCCESS
2025-08-29 15:00:39.941023 | orchestrator |
2025-08-29 15:00:39.941063 | orchestrator |
2025-08-29 15:00:39.941071 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:00:39.941079 | orchestrator |
2025-08-29 15:00:39.941086 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:00:39.941093 | orchestrator | Friday 29 August 2025 14:57:45 +0000 (0:00:00.277) 0:00:00.277 *********
2025-08-29 15:00:39.941100 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:39.941108 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:39.941114 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:39.941121 | orchestrator |
2025-08-29 15:00:39.941127 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:00:39.941134 | orchestrator | Friday 29 August 2025 14:57:45 +0000 (0:00:00.266) 0:00:00.543 *********
2025-08-29 15:00:39.941141 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-08-29 15:00:39.941147 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-08-29 15:00:39.941154 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-08-29 15:00:39.941160 | orchestrator | 2025-08-29
15:00:39.941166 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-08-29 15:00:39.941173 | orchestrator | 2025-08-29 15:00:39.941179 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 15:00:39.941185 | orchestrator | Friday 29 August 2025 14:57:45 +0000 (0:00:00.392) 0:00:00.936 ********* 2025-08-29 15:00:39.941192 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:39.941198 | orchestrator | 2025-08-29 15:00:39.941204 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-08-29 15:00:39.941211 | orchestrator | Friday 29 August 2025 14:57:46 +0000 (0:00:00.457) 0:00:01.393 ********* 2025-08-29 15:00:39.941217 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 15:00:39.941224 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 15:00:39.941230 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 15:00:39.941236 | orchestrator | 2025-08-29 15:00:39.941256 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-08-29 15:00:39.941293 | orchestrator | Friday 29 August 2025 14:57:47 +0000 (0:00:00.582) 0:00:01.976 ********* 2025-08-29 15:00:39.941303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:00:39.941313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:00:39.941331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:00:39.941340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:00:39.941468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:00:39.941494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:00:39.941506 | orchestrator | 2025-08-29 15:00:39.941517 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 15:00:39.941528 | orchestrator | Friday 29 August 2025 14:57:48 +0000 (0:00:01.439) 0:00:03.415 ********* 2025-08-29 15:00:39.941538 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:39.941549 | orchestrator | 2025-08-29 15:00:39.941559 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA 
certificates] ***** 2025-08-29 15:00:39.941569 | orchestrator | Friday 29 August 2025 14:57:49 +0000 (0:00:00.607) 0:00:04.022 ********* 2025-08-29 15:00:39.941587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:00:39.941594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:00:39.941610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:00:39.941617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:00:39.941629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:00:39.941637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-08-29 15:00:39.941647 | orchestrator | 2025-08-29 15:00:39.941654 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-08-29 15:00:39.941660 | orchestrator | Friday 29 August 2025 14:57:51 +0000 (0:00:02.579) 0:00:06.602 ********* 2025-08-29 15:00:39.941670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:00:39.941677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:00:39.941684 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:39.941691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:00:39.941702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:00:39.941714 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:39.941725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:00:39.941732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:00:39.941738 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:39.941745 | orchestrator | 2025-08-29 15:00:39.941751 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-08-29 15:00:39.941757 | orchestrator | Friday 29 August 2025 14:57:52 +0000 (0:00:00.910) 0:00:07.513 ********* 2025-08-29 15:00:39.941763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:00:39.941775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:00:39.941786 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:39.941796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:00:39.941803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:00:39.941809 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:39.941858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:00:39.941872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:00:39.941886 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:39.941892 | orchestrator | 2025-08-29 15:00:39.941899 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-08-29 15:00:39.941905 | orchestrator | Friday 29 August 2025 14:57:53 +0000 (0:00:00.995) 0:00:08.508 ********* 2025-08-29 15:00:39.941915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2025-08-29 15:00:39.941922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:00:39.941929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:00:39.941941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:00:39.941958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:00:39.941968 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:00:39.941975 | orchestrator | 2025-08-29 15:00:39.941981 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-08-29 15:00:39.941987 | orchestrator | Friday 29 August 2025 14:57:56 +0000 (0:00:02.454) 0:00:10.963 ********* 2025-08-29 15:00:39.941994 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:39.942000 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:39.942006 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:39.942012 | orchestrator | 2025-08-29 15:00:39.942058 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-08-29 15:00:39.942064 | orchestrator | Friday 29 August 2025 14:57:59 +0000 (0:00:03.332) 0:00:14.295 ********* 2025-08-29 15:00:39.942071 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:39.942077 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:39.942083 | 
orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:39.942090 | orchestrator | 2025-08-29 15:00:39.942097 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-08-29 15:00:39.942104 | orchestrator | Friday 29 August 2025 14:58:01 +0000 (0:00:01.725) 0:00:16.021 ********* 2025-08-29 15:00:39.942112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:00:39.942133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:00:39.942145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:00:39.942154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-08-29 15:00:39.942162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:00:39.942180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-08-29 15:00:39.942188 | orchestrator |
2025-08-29 15:00:39.942199 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-08-29 15:00:39.942209 | orchestrator | Friday 29 August 2025 14:58:03 +0000 (0:00:02.112) 0:00:18.133 *********
2025-08-29 15:00:39.942225 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:39.942237 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:39.942246 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:39.942255 | orchestrator |
2025-08-29 15:00:39.942265 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-08-29 15:00:39.942274 | orchestrator | Friday 29 August 2025 14:58:03 +0000 (0:00:00.239) 0:00:18.373 *********
2025-08-29 15:00:39.942284 | orchestrator |
2025-08-29 15:00:39.942294 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-08-29 15:00:39.942303 | orchestrator | Friday 29 August 2025 14:58:03 +0000 (0:00:00.050) 0:00:18.423 *********
2025-08-29 15:00:39.942312 | orchestrator |
2025-08-29 15:00:39.942321 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-08-29 15:00:39.942330 | orchestrator | Friday 29 August 2025 14:58:03 +0000 (0:00:00.052) 0:00:18.475 *********
2025-08-29 15:00:39.942579 | orchestrator |
2025-08-29 15:00:39.942589 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-08-29 15:00:39.942603 | orchestrator | Friday 29 August 2025 14:58:03 +0000 (0:00:00.053) 0:00:18.529 *********
2025-08-29 15:00:39.942610 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:39.942616 | orchestrator |
2025-08-29 15:00:39.942623 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-08-29 15:00:39.942629 | orchestrator | Friday 29 August 2025 14:58:03 +0000 (0:00:00.187) 0:00:18.716 *********
2025-08-29 15:00:39.942635 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:39.942641 | orchestrator |
2025-08-29 15:00:39.942648 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-08-29 15:00:39.942654 | orchestrator | Friday 29 August 2025 14:58:04 +0000 (0:00:00.482) 0:00:19.199 *********
2025-08-29 15:00:39.942660 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:39.942666 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:39.942672 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:39.942679 | orchestrator |
2025-08-29 15:00:39.942685 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-08-29 15:00:39.942691 | orchestrator | Friday 29 August 2025 14:59:12 +0000 (0:01:08.536) 0:01:27.736 *********
2025-08-29 15:00:39.942697 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:39.942703 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:39.942710 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:39.942716 | orchestrator |
2025-08-29 15:00:39.942722 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-08-29 15:00:39.942736 | orchestrator | Friday 29 August 2025 15:00:26 +0000 (0:01:13.429) 0:02:41.166 *********
2025-08-29 15:00:39.942742 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:00:39.942749 | orchestrator |
2025-08-29 15:00:39.942755 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-08-29 15:00:39.942761 | orchestrator | Friday 29 August 2025 15:00:26 +0000 (0:00:00.574) 0:02:41.740 *********
2025-08-29 15:00:39.942767 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:39.942774 | orchestrator |
2025-08-29 15:00:39.942780 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-08-29 15:00:39.942786 | orchestrator | Friday 29 August 2025 15:00:29 +0000 (0:00:02.962) 0:02:44.703 *********
2025-08-29 15:00:39.942792 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:39.942798 | orchestrator |
2025-08-29 15:00:39.942805 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-08-29 15:00:39.942811 | orchestrator | Friday 29 August 2025 15:00:32 +0000 (0:00:02.291) 0:02:46.995 *********
2025-08-29 15:00:39.942842 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:39.942848 | orchestrator |
2025-08-29 15:00:39.942854 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-08-29 15:00:39.942860 | orchestrator | Friday 29 August 2025 15:00:34 +0000 (0:00:02.657) 0:02:49.652 *********
2025-08-29 15:00:39.942866 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:39.942872 | orchestrator |
2025-08-29 15:00:39.942879 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:00:39.942886 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 15:00:39.942894 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 15:00:39.942900 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 15:00:39.942906 | orchestrator |
2025-08-29 15:00:39.942912 | orchestrator |
2025-08-29 15:00:39.942919 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:00:39.942931 | orchestrator | Friday 29 August 2025 15:00:37 +0000 (0:00:02.442) 0:02:52.095 *********
2025-08-29 15:00:39.942937 | orchestrator | ===============================================================================
2025-08-29 15:00:39.942943 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 73.43s
2025-08-29 15:00:39.942950 | orchestrator | opensearch : Restart opensearch container ------------------------------ 68.54s
2025-08-29 15:00:39.942956 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.33s
2025-08-29 15:00:39.942962 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.96s
2025-08-29 15:00:39.942968 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.66s
2025-08-29 15:00:39.942975 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.58s
2025-08-29 15:00:39.942981 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.45s
2025-08-29 15:00:39.942987 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.44s
2025-08-29 15:00:39.942993 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.29s
2025-08-29 15:00:39.942999 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.11s
2025-08-29 15:00:39.943005 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.73s
2025-08-29 15:00:39.943011 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.44s
2025-08-29 15:00:39.943018 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.00s
2025-08-29 15:00:39.943030 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.91s
2025-08-29 15:00:39.943036 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.61s
2025-08-29 15:00:39.943042 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.58s
2025-08-29 15:00:39.943048 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s
2025-08-29 15:00:39.943058 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.48s
2025-08-29 15:00:39.943064 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.46s
2025-08-29 15:00:39.943071 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s
2025-08-29 15:00:39.943077 | orchestrator | 2025-08-29 15:00:39 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:00:42.986750 | orchestrator | 2025-08-29 15:00:42 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED
2025-08-29 15:00:42.990187 | orchestrator | 2025-08-29 15:00:42 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED
2025-08-29 15:00:42.990272 | orchestrator | 2025-08-29 15:00:42 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:00:46.048246 | orchestrator | 2025-08-29 15:00:46 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED
2025-08-29 15:00:46.055913 | orchestrator | 2025-08-29 15:00:46 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED
2025-08-29 15:00:46.055976 | orchestrator | 2025-08-29 15:00:46 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:00:49.104700 | orchestrator | 2025-08-29 15:00:49 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED
2025-08-29 15:00:49.106492 | orchestrator | 2025-08-29 15:00:49 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED
2025-08-29 15:00:49.106547 | orchestrator | 2025-08-29 15:00:49 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:00:52.158326 | orchestrator | 2025-08-29 15:00:52 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED
2025-08-29 15:00:52.161056 | orchestrator | 2025-08-29 15:00:52 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED
2025-08-29 15:00:52.161112 | orchestrator | 2025-08-29 15:00:52 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:00:55.206596 | orchestrator | 2025-08-29 15:00:55 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED
2025-08-29 15:00:55.207875 | orchestrator | 2025-08-29 15:00:55 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED
2025-08-29 15:00:55.207946 | orchestrator | 2025-08-29 15:00:55 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:00:58.248922 | orchestrator | 2025-08-29 15:00:58 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED
2025-08-29 15:00:58.254427 | orchestrator | 2025-08-29 15:00:58 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED
2025-08-29 15:00:58.254507 | orchestrator | 2025-08-29 15:00:58 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:01:01.308072 | orchestrator | 2025-08-29 15:01:01 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED
2025-08-29 15:01:01.308651 | orchestrator | 2025-08-29 15:01:01 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED
2025-08-29 15:01:01.308689 | orchestrator | 2025-08-29 15:01:01 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:01:04.362360 | orchestrator | 2025-08-29 15:01:04 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED
2025-08-29 15:01:04.365258 | orchestrator | 2025-08-29 15:01:04 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state STARTED
2025-08-29 15:01:04.365345 | orchestrator | 2025-08-29 15:01:04 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:01:07.427717 | orchestrator | 2025-08-29 15:01:07 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state STARTED
2025-08-29 15:01:07.429304 | orchestrator | 2025-08-29 15:01:07 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED
2025-08-29 15:01:07.430922 | orchestrator | 2025-08-29 15:01:07 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED
2025-08-29 15:01:07.436394 | orchestrator | 2025-08-29 15:01:07 | INFO  | Task a249ed2b-ddef-44b3-bc8a-3a863ca2972d is in state SUCCESS
2025-08-29 15:01:07.438398 | orchestrator |
2025-08-29 15:01:07.438445 | orchestrator |
2025-08-29 15:01:07.438455 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-08-29 15:01:07.438466 | orchestrator |
2025-08-29 15:01:07.438475 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-08-29 15:01:07.438484 | orchestrator | Friday 29 August 2025 14:57:45 +0000 (0:00:00.096) 0:00:00.096 *********
2025-08-29 15:01:07.438493 | orchestrator | ok: [localhost] => {
2025-08-29 15:01:07.438510 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-08-29 15:01:07.438524 | orchestrator | }
2025-08-29 15:01:07.438538 | orchestrator |
2025-08-29 15:01:07.438551 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-08-29 15:01:07.438566 | orchestrator | Friday 29 August 2025 14:57:45 +0000 (0:00:00.055) 0:00:00.152 *********
2025-08-29 15:01:07.438602 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-08-29 15:01:07.438619 | orchestrator | ...ignoring
2025-08-29 15:01:07.438635 | orchestrator |
2025-08-29 15:01:07.438650 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-08-29 15:01:07.438665 | orchestrator | Friday 29 August 2025 14:57:47 +0000 (0:00:02.820) 0:00:02.972 *********
2025-08-29 15:01:07.438681 | orchestrator | skipping: [localhost]
2025-08-29 15:01:07.438696 | orchestrator |
2025-08-29 15:01:07.438709 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-08-29 15:01:07.438718 | orchestrator | Friday 29 August 2025 14:57:48 +0000 (0:00:00.045) 0:00:03.017 *********
2025-08-29 15:01:07.438727 | orchestrator | ok: [localhost]
2025-08-29 15:01:07.438736 | orchestrator |
2025-08-29 15:01:07.438745 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:01:07.438754 | orchestrator |
2025-08-29 15:01:07.438762 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:01:07.438771 | orchestrator | Friday 29 August 2025 14:57:48 +0000 (0:00:00.140) 0:00:03.158 *********
2025-08-29 15:01:07.438819 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:07.438829 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:07.438838 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:07.438847 | orchestrator |
2025-08-29 15:01:07.438856 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:01:07.438864 | orchestrator | Friday 29 August 2025 14:57:48 +0000 (0:00:00.297) 0:00:03.456 *********
2025-08-29 15:01:07.438873 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-08-29 15:01:07.438882 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-08-29 15:01:07.438891 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-08-29 15:01:07.438900 | orchestrator |
2025-08-29 15:01:07.438909 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-08-29 15:01:07.438917 | orchestrator |
2025-08-29 15:01:07.438927 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-08-29 15:01:07.438940 | orchestrator | Friday 29 August 2025 14:57:49 +0000 (0:00:00.745) 0:00:04.201 *********
2025-08-29 15:01:07.438956 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 15:01:07.439004 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 15:01:07.439021 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 15:01:07.439036 | orchestrator |
2025-08-29 15:01:07.439052 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-08-29 15:01:07.439067 | orchestrator | Friday 29 August 2025 14:57:49 +0000 (0:00:00.381) 0:00:04.582 *********
2025-08-29 15:01:07.439081 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:01:07.439093 | orchestrator |
2025-08-29 15:01:07.439106 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2025-08-29 15:01:07.439121 | orchestrator | Friday 29 August 2025 14:57:50 +0000 (0:00:00.484) 0:00:05.067 *********
2025-08-29 15:01:07.439162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 15:01:07.439192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 15:01:07.439223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 15:01:07.439240 | orchestrator |
2025-08-29 15:01:07.439260 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2025-08-29 15:01:07.439275 | orchestrator | Friday 29 August 2025 14:57:52 +0000 (0:00:02.752) 0:00:07.820 *********
2025-08-29 15:01:07.439289 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:07.439304 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:07.439317 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:07.439331 | orchestrator |
2025-08-29 15:01:07.439346 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2025-08-29 15:01:07.439389 | orchestrator | Friday 29 August 2025 14:57:53 +0000 (0:00:00.619) 0:00:08.439 *********
2025-08-29 15:01:07.439403 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:07.439418 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:07.439432 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:07.439446 | orchestrator |
2025-08-29 15:01:07.439461 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2025-08-29 15:01:07.439483 | orchestrator | Friday 29 August 2025 14:57:54 +0000 (0:00:01.374) 0:00:09.814 *********
2025-08-29 15:01:07.439498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 15:01:07.439532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 15:01:07.439555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 15:01:07.439581 | orchestrator |
2025-08-29 15:01:07.439596 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2025-08-29 15:01:07.439611 | orchestrator | Friday 29 August 2025 14:57:59 +0000 (0:00:04.166) 0:00:13.981 *********
2025-08-29 15:01:07.439625 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:07.439641 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:07.439655 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:07.439669 | orchestrator |
2025-08-29 15:01:07.439684 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2025-08-29 15:01:07.439695 | orchestrator | Friday 29 August 2025 14:58:00 +0000 (0:00:01.110) 0:00:15.092 *********
2025-08-29 15:01:07.439704 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:07.439712 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:07.439721 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:07.439729 | orchestrator |
2025-08-29 15:01:07.439738 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-08-29 15:01:07.439747 | orchestrator | Friday 29 August 2025 14:58:03 +0000 (0:00:03.738) 0:00:18.830 *********
2025-08-29 15:01:07.439756 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:01:07.439765 | orchestrator |
2025-08-29 15:01:07.439773 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-08-29 15:01:07.439838 | orchestrator | Friday 29 August 2025 14:58:04 +0000 (0:00:00.487) 0:00:19.318 *********
2025-08-29 15:01:07.439875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 15:01:07.439893 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:07.439903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 15:01:07.439913 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:07.439929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 15:01:07.439939 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:07.439948 | orchestrator |
2025-08-29 15:01:07.439957 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-08-29 15:01:07.439972 | orchestrator | Friday 29 August 2025 14:58:08 +0000 (0:00:04.109) 0:00:23.427 *********
2025-08-29 15:01:07.440000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 15:01:07.440016 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:07.440039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 15:01:07.440056 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:07.440078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:01:07.440102 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:07.440111 | orchestrator | 2025-08-29 15:01:07.440120 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-08-29 15:01:07.440129 | orchestrator | Friday 29 August 2025 14:58:11 +0000 (0:00:03.457) 0:00:26.884 ********* 2025-08-29 15:01:07.440138 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:01:07.440148 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:07.440169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:01:07.440185 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:07.440195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:01:07.440204 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:07.440213 | orchestrator | 2025-08-29 15:01:07.440222 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-08-29 15:01:07.440231 | orchestrator | Friday 29 August 2025 14:58:15 +0000 
(0:00:03.282) 0:00:30.166 *********
2025-08-29 15:01:07.440252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 15:01:07.440268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 15:01:07.440287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 15:01:07.440307 | orchestrator |
2025-08-29 15:01:07.440317 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2025-08-29 15:01:07.440326 | orchestrator | Friday 29 August 2025 14:58:18 +0000 (0:00:03.493) 0:00:33.660 *********
2025-08-29 15:01:07.440334 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:07.440343 | orchestrator |
changed: [testbed-node-1]
2025-08-29 15:01:07.440352 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:07.440361 | orchestrator |
2025-08-29 15:01:07.440370 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-08-29 15:01:07.440378 | orchestrator | Friday 29 August 2025 14:58:19 +0000 (0:00:00.954) 0:00:34.614 *********
2025-08-29 15:01:07.440387 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:07.440397 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:07.440411 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:07.440426 | orchestrator |
2025-08-29 15:01:07.440438 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-08-29 15:01:07.440529 | orchestrator | Friday 29 August 2025 14:58:20 +0000 (0:00:00.714) 0:00:35.328 *********
2025-08-29 15:01:07.440548 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:07.440561 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:07.440575 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:07.440588 | orchestrator |
2025-08-29 15:01:07.440602 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-08-29 15:01:07.440616 | orchestrator | Friday 29 August 2025 14:58:20 +0000 (0:00:00.474) 0:00:35.803 *********
2025-08-29 15:01:07.440632 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-08-29 15:01:07.440646 | orchestrator | ...ignoring
2025-08-29 15:01:07.440660 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-08-29 15:01:07.440675 | orchestrator | ...ignoring
2025-08-29 15:01:07.440832 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-08-29 15:01:07.440873 | orchestrator | ...ignoring
2025-08-29 15:01:07.440888 | orchestrator |
2025-08-29 15:01:07.440902 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-08-29 15:01:07.440918 | orchestrator | Friday 29 August 2025 14:58:31 +0000 (0:00:11.065) 0:00:46.869 *********
2025-08-29 15:01:07.440933 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:07.440947 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:07.440961 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:07.440976 | orchestrator |
2025-08-29 15:01:07.440991 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-08-29 15:01:07.441020 | orchestrator | Friday 29 August 2025 14:58:32 +0000 (0:00:00.433) 0:00:47.302 *********
2025-08-29 15:01:07.441034 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:07.441050 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:07.441063 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:07.441076 | orchestrator |
2025-08-29 15:01:07.441090 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-08-29 15:01:07.441105 | orchestrator | Friday 29 August 2025 14:58:33 +0000 (0:00:00.706) 0:00:48.009 *********
2025-08-29 15:01:07.441121 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:07.441136 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:07.441150 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:07.441164 | orchestrator |
2025-08-29 15:01:07.441179 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-08-29 15:01:07.441188 | orchestrator | Friday 29 August 2025 14:58:33 +0000 (0:00:00.400) 0:00:48.410 *********
2025-08-29 15:01:07.441198 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:07.441212 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:07.441226 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:07.441240 | orchestrator |
2025-08-29 15:01:07.441255 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-08-29 15:01:07.441270 | orchestrator | Friday 29 August 2025 14:58:33 +0000 (0:00:00.382) 0:00:48.792 *********
2025-08-29 15:01:07.441284 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:07.441299 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:07.441314 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:07.441328 | orchestrator |
2025-08-29 15:01:07.441342 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-08-29 15:01:07.441357 | orchestrator | Friday 29 August 2025 14:58:34 +0000 (0:00:00.420) 0:00:49.213 *********
2025-08-29 15:01:07.441389 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:07.441404 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:07.441418 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:07.441432 | orchestrator |
2025-08-29 15:01:07.441447 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-08-29 15:01:07.441461 | orchestrator | Friday 29 August 2025 14:58:34 +0000 (0:00:00.760) 0:00:49.973 *********
2025-08-29 15:01:07.441476 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:07.441490 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:07.441504 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2025-08-29 15:01:07.441519 | orchestrator |
2025-08-29 15:01:07.441532 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2025-08-29 15:01:07.441546 | orchestrator | Friday 29 August 2025 14:58:35 +0000 (0:00:00.351) 0:00:50.325 *********
2025-08-29 15:01:07.441570 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:07.441586 | orchestrator |
2025-08-29 15:01:07.441603 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2025-08-29 15:01:07.441620 | orchestrator | Friday 29 August 2025 14:58:45 +0000 (0:00:09.994) 0:01:00.319 *********
2025-08-29 15:01:07.441635 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:07.441661 | orchestrator |
2025-08-29 15:01:07.441678 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-08-29 15:01:07.441696 | orchestrator | Friday 29 August 2025 14:58:45 +0000 (0:00:00.183) 0:01:00.503 *********
2025-08-29 15:01:07.441710 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:07.441724 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:07.441739 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:07.441753 | orchestrator |
2025-08-29 15:01:07.441767 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2025-08-29 15:01:07.441851 | orchestrator | Friday 29 August 2025 14:58:46 +0000 (0:00:01.146) 0:01:01.650 *********
2025-08-29 15:01:07.441870 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:07.441887 | orchestrator |
2025-08-29 15:01:07.441901 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2025-08-29 15:01:07.441931 | orchestrator | Friday 29 August 2025 14:58:55 +0000 (0:00:08.730) 0:01:10.380 *********
2025-08-29 15:01:07.441948 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:07.441963 | orchestrator |
2025-08-29 15:01:07.441977 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2025-08-29 15:01:07.441992 | orchestrator | Friday 29 August 2025 14:58:57 +0000 (0:00:01.623) 0:01:12.004 *********
2025-08-29 15:01:07.442007 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:07.442120 | orchestrator |
2025-08-29 15:01:07.442133 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2025-08-29 15:01:07.442145 | orchestrator | Friday 29 August 2025 14:58:59 +0000 (0:00:02.721) 0:01:14.726 *********
2025-08-29 15:01:07.442158 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:07.442171 | orchestrator |
2025-08-29 15:01:07.442184 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2025-08-29 15:01:07.442197 | orchestrator | Friday 29 August 2025 14:58:59 +0000 (0:00:00.135) 0:01:14.861 *********
2025-08-29 15:01:07.442210 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:07.442223 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:07.442237 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:07.442251 | orchestrator |
2025-08-29 15:01:07.442265 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2025-08-29 15:01:07.442279 | orchestrator | Friday 29 August 2025 14:59:00 +0000 (0:00:00.436) 0:01:15.297 *********
2025-08-29 15:01:07.442291 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:07.442304 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-08-29 15:01:07.442316 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:07.442328 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:07.442341 | orchestrator |
2025-08-29 15:01:07.442355 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-08-29 15:01:07.442368 | orchestrator | skipping: no hosts matched
2025-08-29 15:01:07.442381 | orchestrator |
2025-08-29 15:01:07.442395 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-08-29 15:01:07.442408 | orchestrator |
2025-08-29 15:01:07.442420 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-08-29 15:01:07.442433 | orchestrator | Friday 29 August 2025 14:59:00 +0000 (0:00:00.671) 0:01:15.969 *********
2025-08-29 15:01:07.442445 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:07.442458 | orchestrator |
2025-08-29 15:01:07.442471 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-08-29 15:01:07.442483 | orchestrator | Friday 29 August 2025 14:59:21 +0000 (0:00:20.178) 0:01:36.147 *********
2025-08-29 15:01:07.442496 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:07.442508 | orchestrator |
2025-08-29 15:01:07.442520 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-08-29 15:01:07.442532 | orchestrator | Friday 29 August 2025 14:59:41 +0000 (0:00:20.602) 0:01:56.750 *********
2025-08-29 15:01:07.442544 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:07.442556 | orchestrator |
2025-08-29 15:01:07.442568 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-08-29 15:01:07.442580 | orchestrator |
2025-08-29 15:01:07.442592 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-08-29 15:01:07.442604 | orchestrator | Friday 29 August 2025 14:59:44 +0000 (0:00:02.570) 0:01:59.321 *********
2025-08-29 15:01:07.442616 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:07.442628 | orchestrator |
2025-08-29 15:01:07.442641 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-08-29 15:01:07.442654 | orchestrator | Friday 29 August 2025 15:00:06 +0000 (0:00:21.969) 0:02:21.291 *********
2025-08-29 15:01:07.442667 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:07.442680 | orchestrator |
2025-08-29 15:01:07.442693 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-08-29 15:01:07.442707 | orchestrator | Friday 29 August 2025 15:00:26 +0000 (0:00:20.585) 0:02:41.876 *********
2025-08-29 15:01:07.442732 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:07.442744 | orchestrator |
2025-08-29 15:01:07.442757 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-08-29 15:01:07.442769 | orchestrator |
2025-08-29 15:01:07.442819 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-08-29 15:01:07.442833 | orchestrator | Friday 29 August 2025 15:00:29 +0000 (0:00:02.749) 0:02:44.626 *********
2025-08-29 15:01:07.442847 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:07.442860 | orchestrator |
2025-08-29 15:01:07.442873 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-08-29 15:01:07.442886 | orchestrator | Friday 29 August 2025 15:00:47 +0000 (0:00:17.984) 0:03:02.610 *********
2025-08-29 15:01:07.442899 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:07.442911 | orchestrator |
2025-08-29 15:01:07.442924 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-08-29 15:01:07.442937 | orchestrator | Friday 29 August 2025 15:00:48 +0000 (0:00:00.559) 0:03:03.169 *********
2025-08-29 15:01:07.442950 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:07.442963 | orchestrator |
2025-08-29 15:01:07.442985 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-08-29 15:01:07.442999 | orchestrator |
2025-08-29 15:01:07.443011 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-08-29 15:01:07.443024 | orchestrator | Friday 29 August 2025 15:00:51 +0000 (0:00:03.195) 0:03:06.364 *********
2025-08-29 15:01:07.443037 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:01:07.443050 | orchestrator |
2025-08-29 15:01:07.443063 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2025-08-29 15:01:07.443075 | orchestrator | Friday 29 August 2025 15:00:51 +0000 (0:00:00.535) 0:03:06.900 *********
2025-08-29 15:01:07.443088 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:07.443099 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:07.443112 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:07.443125 | orchestrator |
2025-08-29 15:01:07.443137 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2025-08-29 15:01:07.443149 | orchestrator | Friday 29 August 2025 15:00:54 +0000 (0:00:02.219) 0:03:09.120 *********
2025-08-29 15:01:07.443162 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:07.443174 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:07.443186 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:07.443199 | orchestrator |
2025-08-29 15:01:07.443211 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2025-08-29 15:01:07.443223 | orchestrator | Friday 29 August 2025 15:00:56 +0000 (0:00:02.265) 0:03:11.385 *********
2025-08-29 15:01:07.443236 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:07.443248 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:07.443260 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:07.443273 | orchestrator |
2025-08-29 15:01:07.443286 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2025-08-29 15:01:07.443299 | orchestrator | Friday 29 August 2025 15:00:58 +0000 (0:00:02.199) 0:03:13.584 *********
2025-08-29 15:01:07.443312 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:07.443324 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:07.443337 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:07.443350 | orchestrator |
2025-08-29 15:01:07.443362 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2025-08-29 15:01:07.443374 | orchestrator | Friday 29 August 2025 15:01:00 +0000 (0:00:02.304) 0:03:15.889 *********
2025-08-29 15:01:07.443387 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:07.443400 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:07.443412 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:07.443425 | orchestrator |
2025-08-29 15:01:07.443437 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-08-29 15:01:07.443461 | orchestrator | Friday 29 August 2025 15:01:04 +0000 (0:00:03.849) 0:03:19.738 *********
2025-08-29 15:01:07.443474 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:07.443487 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:07.443499 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:07.443512 | orchestrator |
2025-08-29 15:01:07.443524 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:01:07.443538 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-08-29 15:01:07.443552 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2025-08-29 15:01:07.443566 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-08-29 15:01:07.443580 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-08-29 15:01:07.443593 | orchestrator |
2025-08-29 15:01:07.443605 | orchestrator |
2025-08-29 15:01:07.443618 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:01:07.443632 | orchestrator | Friday 29 August 2025 15:01:05 +0000 (0:00:00.654) 0:03:20.393 *********
2025-08-29 15:01:07.443645 | orchestrator | ===============================================================================
2025-08-29 15:01:07.443657 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 42.15s
2025-08-29 15:01:07.443670 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.19s
2025-08-29 15:01:07.443682 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.98s
2025-08-29 15:01:07.443694 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.07s
2025-08-29 15:01:07.443708 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.99s
2025-08-29 15:01:07.443722 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.73s
2025-08-29 15:01:07.443746 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.32s
2025-08-29 15:01:07.443759 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.17s
2025-08-29 15:01:07.443771 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.11s
2025-08-29 15:01:07.443804 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.85s
2025-08-29 15:01:07.443818 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.74s
2025-08-29 15:01:07.443831 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.49s
2025-08-29 15:01:07.443844 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.46s
2025-08-29 15:01:07.443856 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.28s
2025-08-29 15:01:07.443877 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.20s
2025-08-29 15:01:07.443890 |
orchestrator | Check MariaDB service --------------------------------------------------- 2.82s 2025-08-29 15:01:07.443903 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.75s 2025-08-29 15:01:07.443916 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.72s 2025-08-29 15:01:07.443930 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.30s 2025-08-29 15:01:07.443943 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.27s 2025-08-29 15:01:07.443956 | orchestrator | 2025-08-29 15:01:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:10.491848 | orchestrator | 2025-08-29 15:01:10 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state STARTED 2025-08-29 15:01:10.493397 | orchestrator | 2025-08-29 15:01:10 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED 2025-08-29 15:01:10.495182 | orchestrator | 2025-08-29 15:01:10 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:01:10.495568 | orchestrator | 2025-08-29 15:01:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:13.533656 | orchestrator | 2025-08-29 15:01:13 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state STARTED 2025-08-29 15:01:13.535316 | orchestrator | 2025-08-29 15:01:13 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED 2025-08-29 15:01:13.536158 | orchestrator | 2025-08-29 15:01:13 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:01:13.536187 | orchestrator | 2025-08-29 15:01:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:16.582272 | orchestrator | 2025-08-29 15:01:16 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state STARTED 2025-08-29 15:01:16.583588 | orchestrator | 2025-08-29 15:01:16 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is 
in state STARTED 2025-08-29 15:01:16.584941 | orchestrator | 2025-08-29 15:01:16 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:01:16.584967 | orchestrator | 2025-08-29 15:01:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:26.716073 | orchestrator | 2025-08-29 15:02:26 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state STARTED 2025-08-29 15:02:26.718434 | orchestrator | 2025-08-29 15:02:26 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in
state STARTED 2025-08-29 15:02:26.720362 | orchestrator | 2025-08-29 15:02:26 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:02:26.720427 | orchestrator | 2025-08-29 15:02:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:29.777488 | orchestrator | 2025-08-29 15:02:29 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state STARTED 2025-08-29 15:02:29.780736 | orchestrator | 2025-08-29 15:02:29 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED 2025-08-29 15:02:29.784433 | orchestrator | 2025-08-29 15:02:29 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:02:29.784514 | orchestrator | 2025-08-29 15:02:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:32.831778 | orchestrator | 2025-08-29 15:02:32 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state STARTED 2025-08-29 15:02:32.833429 | orchestrator | 2025-08-29 15:02:32 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state STARTED 2025-08-29 15:02:32.835905 | orchestrator | 2025-08-29 15:02:32 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:02:32.835951 | orchestrator | 2025-08-29 15:02:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:35.890219 | orchestrator | 2025-08-29 15:02:35 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state STARTED 2025-08-29 15:02:35.892076 | orchestrator | 2025-08-29 15:02:35 | INFO  | Task c72b752a-0308-42f5-871c-c7dea1075cc2 is in state SUCCESS 2025-08-29 15:02:35.892276 | orchestrator | 2025-08-29 15:02:35.894212 | orchestrator | 2025-08-29 15:02:35.894235 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-08-29 15:02:35.894243 | orchestrator | 2025-08-29 15:02:35.894250 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-08-29 15:02:35.894277 | orchestrator | 
Friday 29 August 2025 15:00:19 +0000 (0:00:00.702) 0:00:00.702 ********* 2025-08-29 15:02:35.894284 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:02:35.894292 | orchestrator | 2025-08-29 15:02:35.894298 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-08-29 15:02:35.894305 | orchestrator | Friday 29 August 2025 15:00:20 +0000 (0:00:00.712) 0:00:01.414 ********* 2025-08-29 15:02:35.894311 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:35.894319 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:35.894325 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:35.894331 | orchestrator | 2025-08-29 15:02:35.894338 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-08-29 15:02:35.894344 | orchestrator | Friday 29 August 2025 15:00:20 +0000 (0:00:00.669) 0:00:02.083 ********* 2025-08-29 15:02:35.894350 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:35.894356 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:35.894362 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:35.894368 | orchestrator | 2025-08-29 15:02:35.894374 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-08-29 15:02:35.894381 | orchestrator | Friday 29 August 2025 15:00:21 +0000 (0:00:00.320) 0:00:02.404 ********* 2025-08-29 15:02:35.894387 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:35.894393 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:35.894399 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:35.894406 | orchestrator | 2025-08-29 15:02:35.894412 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-08-29 15:02:35.894419 | orchestrator | Friday 29 August 2025 15:00:22 +0000 (0:00:00.842) 0:00:03.247 ********* 2025-08-29 15:02:35.894425 | orchestrator | ok: 
[testbed-node-3] 2025-08-29 15:02:35.894431 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:35.894437 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:35.894443 | orchestrator | 2025-08-29 15:02:35.894449 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-08-29 15:02:35.894455 | orchestrator | Friday 29 August 2025 15:00:22 +0000 (0:00:00.299) 0:00:03.546 ********* 2025-08-29 15:02:35.894461 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:35.894468 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:35.894474 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:35.894480 | orchestrator | 2025-08-29 15:02:35.894486 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-08-29 15:02:35.894492 | orchestrator | Friday 29 August 2025 15:00:22 +0000 (0:00:00.283) 0:00:03.830 ********* 2025-08-29 15:02:35.894498 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:35.894504 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:35.894510 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:35.894516 | orchestrator | 2025-08-29 15:02:35.894523 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-08-29 15:02:35.894529 | orchestrator | Friday 29 August 2025 15:00:23 +0000 (0:00:00.395) 0:00:04.225 ********* 2025-08-29 15:02:35.894535 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:35.894542 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:35.894548 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:35.894554 | orchestrator | 2025-08-29 15:02:35.894560 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-08-29 15:02:35.894567 | orchestrator | Friday 29 August 2025 15:00:23 +0000 (0:00:00.603) 0:00:04.829 ********* 2025-08-29 15:02:35.894573 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:35.894579 | 
orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:35.894585 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:35.894591 | orchestrator | 2025-08-29 15:02:35.894597 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-08-29 15:02:35.894604 | orchestrator | Friday 29 August 2025 15:00:24 +0000 (0:00:00.316) 0:00:05.145 ********* 2025-08-29 15:02:35.894610 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:02:35.894621 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:02:35.894639 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:02:35.894647 | orchestrator | 2025-08-29 15:02:35.894657 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-08-29 15:02:35.894668 | orchestrator | Friday 29 August 2025 15:00:24 +0000 (0:00:00.666) 0:00:05.811 ********* 2025-08-29 15:02:35.894678 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:02:35.894785 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:02:35.894796 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:02:35.894805 | orchestrator | 2025-08-29 15:02:35.894815 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-08-29 15:02:35.894825 | orchestrator | Friday 29 August 2025 15:00:25 +0000 (0:00:00.450) 0:00:06.262 ********* 2025-08-29 15:02:35.894835 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:02:35.894847 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:02:35.894858 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:02:35.894868 | orchestrator | 2025-08-29 15:02:35.894878 | orchestrator | TASK [ceph-facts : 
Check for a ceph mon socket] ******************************** 2025-08-29 15:02:35.894890 | orchestrator | Friday 29 August 2025 15:00:27 +0000 (0:00:02.197) 0:00:08.460 ********* 2025-08-29 15:02:35.894898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 15:02:35.894905 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 15:02:35.894913 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 15:02:35.894920 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:35.894927 | orchestrator | 2025-08-29 15:02:35.895239 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-08-29 15:02:35.895256 | orchestrator | Friday 29 August 2025 15:00:27 +0000 (0:00:00.413) 0:00:08.873 ********* 2025-08-29 15:02:35.895265 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-08-29 15:02:35.895274 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-08-29 15:02:35.895280 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-08-29 15:02:35.895287 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:35.895293 | orchestrator | 2025-08-29 15:02:35.895300 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-08-29 15:02:35.895306 | orchestrator | Friday 29 August 2025 15:00:28 +0000 (0:00:00.942) 
0:00:09.816 ********* 2025-08-29 15:02:35.895314 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:35.895324 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:35.895339 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:35.895346 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:35.895352 | orchestrator | 2025-08-29 15:02:35.895359 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-08-29 15:02:35.895365 | orchestrator | Friday 29 August 2025 15:00:28 +0000 (0:00:00.169) 0:00:09.985 ********* 2025-08-29 15:02:35.895379 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '16628aacf182', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-08-29 15:00:25.777552', 
'end': '2025-08-29 15:00:25.825013', 'delta': '0:00:00.047461', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['16628aacf182'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-08-29 15:02:35.895389 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5025a8751a77', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-08-29 15:00:26.533314', 'end': '2025-08-29 15:00:26.577026', 'delta': '0:00:00.043712', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5025a8751a77'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-08-29 15:02:35.895405 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b09d4a495d08', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-08-29 15:00:27.168280', 'end': '2025-08-29 15:00:27.209349', 'delta': '0:00:00.041069', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b09d4a495d08'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-08-29 15:02:35.895416 | orchestrator |
2025-08-29 15:02:35.895425 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-08-29 15:02:35.895434 | orchestrator | Friday 29 August 2025 15:00:29 +0000 (0:00:00.405) 0:00:10.390 *********
2025-08-29 15:02:35.895444 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:02:35.895454 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:02:35.895464 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:02:35.895474 | orchestrator |
2025-08-29 15:02:35.895484 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-08-29 15:02:35.895494 | orchestrator | Friday 29 August 2025 15:00:29 +0000 (0:00:00.444) 0:00:10.835 *********
2025-08-29 15:02:35.895504 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-08-29 15:02:35.895515 | orchestrator |
2025-08-29 15:02:35.895534 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-08-29 15:02:35.895544 | orchestrator | Friday 29 August 2025 15:00:31 +0000 (0:00:01.752) 0:00:12.587 *********
2025-08-29 15:02:35.895554 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.895564 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.895574 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:02:35.895580 | orchestrator |
2025-08-29 15:02:35.895586 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-08-29 15:02:35.895592 | orchestrator | Friday 29 August 2025 15:00:31 +0000 (0:00:00.356) 0:00:12.943 *********
2025-08-29 15:02:35.895599 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.895605 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.895611 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:02:35.895617 | orchestrator |
2025-08-29 15:02:35.895623 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-08-29 15:02:35.895629 | orchestrator | Friday 29 August 2025 15:00:32 +0000 (0:00:00.424) 0:00:13.368 *********
2025-08-29 15:02:35.895636 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.895642 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.895648 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:02:35.895654 | orchestrator |
2025-08-29 15:02:35.895663 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-08-29 15:02:35.895673 | orchestrator | Friday 29 August 2025 15:00:32 +0000 (0:00:00.590) 0:00:13.958 *********
2025-08-29 15:02:35.895682 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:02:35.895713 | orchestrator |
2025-08-29 15:02:35.895722 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-08-29 15:02:35.895732 | orchestrator | Friday 29 August 2025 15:00:32 +0000 (0:00:00.148) 0:00:14.107 *********
2025-08-29 15:02:35.895814 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.895822 | orchestrator |
2025-08-29 15:02:35.895828 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-08-29 15:02:35.895834 | orchestrator | Friday 29 August 2025 15:00:33 +0000 (0:00:00.246) 0:00:14.353 *********
2025-08-29 15:02:35.895840 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.895847 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.895853 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:02:35.895859 | orchestrator |
2025-08-29 15:02:35.895865 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-08-29 15:02:35.895871 | orchestrator | Friday 29 August 2025 15:00:33 +0000 (0:00:00.309) 0:00:14.663 *********
2025-08-29 15:02:35.895877 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.896139 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.896157 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:02:35.896168 | orchestrator |
2025-08-29 15:02:35.896175 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-08-29 15:02:35.896181 | orchestrator | Friday 29 August 2025 15:00:33 +0000 (0:00:00.355) 0:00:15.019 *********
2025-08-29 15:02:35.896187 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.896193 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.896200 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:02:35.896206 | orchestrator |
2025-08-29 15:02:35.896212 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-08-29 15:02:35.896218 | orchestrator | Friday 29 August 2025 15:00:34 +0000 (0:00:00.557) 0:00:15.576 *********
2025-08-29 15:02:35.896224 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.896230 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.896236 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:02:35.896242 | orchestrator |
2025-08-29 15:02:35.896249 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-08-29 15:02:35.896255 | orchestrator | Friday 29 August 2025 15:00:34 +0000 (0:00:00.344) 0:00:15.921 *********
2025-08-29 15:02:35.896261 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.896267 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.896280 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:02:35.896287 | orchestrator |
2025-08-29 15:02:35.896293 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-08-29 15:02:35.896299 | orchestrator | Friday 29 August 2025 15:00:35 +0000 (0:00:00.412) 0:00:16.333 *********
2025-08-29 15:02:35.896305 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.896311 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.896317 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:02:35.896323 | orchestrator |
2025-08-29 15:02:35.896329 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-08-29 15:02:35.896360 | orchestrator | Friday 29 August 2025 15:00:35 +0000 (0:00:00.388) 0:00:16.721 *********
2025-08-29 15:02:35.896367 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.896374 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.896380 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:02:35.896386 | orchestrator |
2025-08-29 15:02:35.896406 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-08-29 15:02:35.896413 | orchestrator | Friday 29 August 2025 15:00:36 +0000 (0:00:00.614) 0:00:17.336 *********
2025-08-29 15:02:35.896420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--73f6d854--e6b6--54de--b399--c089d2858352-osd--block--73f6d854--e6b6--54de--b399--c089d2858352', 'dm-uuid-LVM-TEALsbrfrE7SLR1OalMwM0X8nCCTvLnFVAaUKmbkx6MVUCEiqPR6jSIbkRHXIqFa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 15:02:35.896429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b0db6b07--6be9--5d1b--9597--ea455233b3a1-osd--block--b0db6b07--6be9--5d1b--9597--ea455233b3a1', 'dm-uuid-LVM-V0mIVikotbLWfFY3h0eQCXH0vRmpIDcsVOLNkcVPBXWn7BukTDvVp0bBj80gOObg'], 'labels': [],
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896528 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a', 'scsi-SQEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_64be458b-ccc0-4bb0-97ba-3c13881a6e5a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 
'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:35.896541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8955e74f--f88a--5c8e--a869--5f490c143acc-osd--block--8955e74f--f88a--5c8e--a869--5f490c143acc', 'dm-uuid-LVM-GL8RBRk7JsbOtuMXFSkoGw73fN6hxG0ak4TjArrEISI2heGBlA4cRzgqY9nPbFnR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--73f6d854--e6b6--54de--b399--c089d2858352-osd--block--73f6d854--e6b6--54de--b399--c089d2858352'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V9ovz7-GIq2-tF1d-Owus-2UHr-v8sj-1Fxx35', 'scsi-0QEMU_QEMU_HARDDISK_b5eca971-d360-4d10-a7ea-637f4b5fbeee', 'scsi-SQEMU_QEMU_HARDDISK_b5eca971-d360-4d10-a7ea-637f4b5fbeee'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:35.896575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--76bc2ac4--c5cd--591d--a103--fddbd09e4373-osd--block--76bc2ac4--c5cd--591d--a103--fddbd09e4373', 'dm-uuid-LVM-P1Vrtaz3bb7hJ1aKWnFLuz2LxKSraMY7EYtxcho3wZvorudivDul03HJRET9qYqN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b0db6b07--6be9--5d1b--9597--ea455233b3a1-osd--block--b0db6b07--6be9--5d1b--9597--ea455233b3a1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tds9kX-NVeh-oQdQ-yJw0-iJwc-Se6q-lU97tY', 'scsi-0QEMU_QEMU_HARDDISK_8530debd-d017-4f26-8837-9c6ea90d3888', 'scsi-SQEMU_QEMU_HARDDISK_8530debd-d017-4f26-8837-9c6ea90d3888'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:35.896588 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_994268b6-638b-49ae-9337-a5a883f2caf6', 'scsi-SQEMU_QEMU_HARDDISK_994268b6-638b-49ae-9337-a5a883f2caf6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:35.896616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:35.896648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-08-29 15:02:35.896655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896662 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896675 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:02:35.896681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part1', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part14', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part15', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part16', 
'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:35.896742 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8955e74f--f88a--5c8e--a869--5f490c143acc-osd--block--8955e74f--f88a--5c8e--a869--5f490c143acc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tAXKcI-A8hy-IioI-LWvB-km1w-baTb-bsyZta', 'scsi-0QEMU_QEMU_HARDDISK_cfe2d7b1-468c-4925-a2ef-e57e3e9904a6', 'scsi-SQEMU_QEMU_HARDDISK_cfe2d7b1-468c-4925-a2ef-e57e3e9904a6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:35.896749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--76bc2ac4--c5cd--591d--a103--fddbd09e4373-osd--block--76bc2ac4--c5cd--591d--a103--fddbd09e4373'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xBwyMF-FYxc-04qI-0fEU-AYWj-zcBC-304X7g', 'scsi-0QEMU_QEMU_HARDDISK_2af32ac1-7951-4112-a429-d0343cb67ad9', 'scsi-SQEMU_QEMU_HARDDISK_2af32ac1-7951-4112-a429-d0343cb67ad9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:35.896756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4cd25e5-e70d-493f-a3e8-ae6027cdfc98', 'scsi-SQEMU_QEMU_HARDDISK_e4cd25e5-e70d-493f-a3e8-ae6027cdfc98'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:35.896771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dc8c4f7f--2eb1--5ff6--8642--584f5da1f281-osd--block--dc8c4f7f--2eb1--5ff6--8642--584f5da1f281', 'dm-uuid-LVM-Eq4cq90aBFRmjdeLun4eAe2ZEskDWwIf24G83ImBo8oYKublABAfbDRepl2GsjEU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-06-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:02:35.896784 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:02:35.896795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--74173feb--4ed6--53ea--9fd2--1d4ff9ba2fde-osd--block--74173feb--4ed6--53ea--9fd2--1d4ff9ba2fde', 'dm-uuid-LVM-iKyJOsH78rtvgbLs6UfPuiIqW1omUJULgefd1komE3xEnVuXzInDXRsz03pmssLm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896809 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896821 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:02:35.896848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'loop6', 'loop7', 'sda', 'sdb', 'sdc', 'sdd', 'sr0' skipped for [testbed-node-5] with the same per-device facts as the nodes above (empty loop devices; 80.00 GB QEMU root disk sda with partitions sda1/sda14/sda15/sda16; 20.00 GB Ceph OSD volumes sdb and sdc; unused 20.00 GB disk sdd; config-2 QEMU DVD-ROM sr0)
2025-08-29 15:02:35.896911 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:02:35.896917 | orchestrator |
2025-08-29 15:02:35.896923 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-08-29 15:02:35.896930 | orchestrator | Friday 29 August 2025 15:00:36 +0000 (0:00:00.625) 0:00:17.962 *********
2025-08-29 15:02:35.896936 | orchestrator | skipping: [testbed-node-3] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0; skip_reason: 'Conditional result was False', false_condition: 'osd_auto_discovery | default(False) | bool')
2025-08-29 15:02:35.897077 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.897083 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0; skip_reason: 'Conditional result was False', false_condition: 'osd_auto_discovery | default(False) | bool')
2025-08-29 15:02:35.897090 | orchestrator | skipping: [testbed-node-4] => (items dm-0, dm-1, loop0-loop7; skip_reason: 'Conditional result was False', false_condition: 'osd_auto_discovery | default(False) | bool')
2025-08-29 15:02:35.897322 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part1', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part14', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part15', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part16', 'scsi-SQEMU_QEMU_HARDDISK_b745d13d-bb5e-416e-bb08-91f06edea026-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-08-29 15:02:35.897333 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:02:35.897340 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8955e74f--f88a--5c8e--a869--5f490c143acc-osd--block--8955e74f--f88a--5c8e--a869--5f490c143acc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tAXKcI-A8hy-IioI-LWvB-km1w-baTb-bsyZta', 'scsi-0QEMU_QEMU_HARDDISK_cfe2d7b1-468c-4925-a2ef-e57e3e9904a6', 'scsi-SQEMU_QEMU_HARDDISK_cfe2d7b1-468c-4925-a2ef-e57e3e9904a6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:35.897349 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--76bc2ac4--c5cd--591d--a103--fddbd09e4373-osd--block--76bc2ac4--c5cd--591d--a103--fddbd09e4373'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xBwyMF-FYxc-04qI-0fEU-AYWj-zcBC-304X7g', 'scsi-0QEMU_QEMU_HARDDISK_2af32ac1-7951-4112-a429-d0343cb67ad9', 'scsi-SQEMU_QEMU_HARDDISK_2af32ac1-7951-4112-a429-d0343cb67ad9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:35.897356 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4cd25e5-e70d-493f-a3e8-ae6027cdfc98', 'scsi-SQEMU_QEMU_HARDDISK_e4cd25e5-e70d-493f-a3e8-ae6027cdfc98'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:02:35.897367 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-06-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:02:35.897373 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.897384 | orchestrator |
2025-08-29 15:02:35.897390 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-08-29 15:02:35.897396 | orchestrator | Friday 29 August 2025 15:00:37 +0000 (0:00:00.662) 0:00:18.624 *********
2025-08-29 15:02:35.897403 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:02:35.897409 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:02:35.897415 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:02:35.897421 | orchestrator |
2025-08-29 15:02:35.897427 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-08-29 15:02:35.897434 | orchestrator | Friday 29 August 2025 15:00:38 +0000 (0:00:00.668) 0:00:19.293 *********
2025-08-29 15:02:35.897440 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:02:35.897446 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:02:35.897452 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:02:35.897458 | orchestrator |
2025-08-29 15:02:35.897464 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-08-29 15:02:35.897471 | orchestrator | Friday 29 August 2025 15:00:38 +0000 (0:00:00.486) 0:00:19.779 *********
2025-08-29 15:02:35.897477 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:02:35.897483 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:02:35.897489 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:02:35.897495 | orchestrator |
2025-08-29 15:02:35.897501 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-08-29 15:02:35.897507 | orchestrator | Friday 29 August 2025 15:00:39 +0000 (0:00:00.653) 0:00:20.432 *********
2025-08-29 15:02:35.897514 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.897520 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.897526 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:02:35.897532 | orchestrator |
2025-08-29 15:02:35.897538 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-08-29 15:02:35.897544 | orchestrator | Friday 29 August 2025 15:00:39 +0000 (0:00:00.322) 0:00:20.754 *********
2025-08-29 15:02:35.897550 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.897557 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.897563 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:02:35.897569 | orchestrator |
2025-08-29 15:02:35.897575 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-08-29 15:02:35.897581 | orchestrator | Friday 29 August 2025 15:00:40 +0000 (0:00:00.566) 0:00:21.173 *********
2025-08-29 15:02:35.897587 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.897593 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.897599 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:02:35.897605 | orchestrator |
2025-08-29 15:02:35.897612 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-08-29 15:02:35.897618 | orchestrator | Friday 29 August 2025 15:00:40 +0000 (0:00:00.938) 0:00:21.739 *********
2025-08-29 15:02:35.897624 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 15:02:35.897630 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-08-29 15:02:35.897636 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 15:02:35.897642 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-08-29 15:02:35.897648 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-08-29 15:02:35.897655 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 15:02:35.897661 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-08-29 15:02:35.897667 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-08-29 15:02:35.897673 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-08-29 15:02:35.897679 | orchestrator |
2025-08-29 15:02:35.897706 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-08-29 15:02:35.897713 | orchestrator | Friday 29 August 2025 15:00:41 +0000 (0:00:00.938) 0:00:22.678 *********
2025-08-29 15:02:35.897723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 15:02:35.897730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 15:02:35.897742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 15:02:35.897748 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.897755 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-08-29 15:02:35.897761 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-08-29 15:02:35.897767 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-08-29 15:02:35.897773 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.897779 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-08-29 15:02:35.897785 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-08-29 15:02:35.897791 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-08-29 15:02:35.897797 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:02:35.897803 | orchestrator |
2025-08-29 15:02:35.897810 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-08-29 15:02:35.897816 | orchestrator | Friday 29 August 2025 15:00:41 +0000 (0:00:00.387) 0:00:23.066 ********* 2025-08-29
15:02:35.897822 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:02:35.897829 | orchestrator |
2025-08-29 15:02:35.897835 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-08-29 15:02:35.897841 | orchestrator | Friday 29 August 2025 15:00:42 +0000 (0:00:00.752) 0:00:23.819 *********
2025-08-29 15:02:35.897848 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.897854 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.897860 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:02:35.897866 | orchestrator |
2025-08-29 15:02:35.897876 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-08-29 15:02:35.897882 | orchestrator | Friday 29 August 2025 15:00:43 +0000 (0:00:00.325) 0:00:24.145 *********
2025-08-29 15:02:35.897888 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.897894 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.897900 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:02:35.897907 | orchestrator |
2025-08-29 15:02:35.897913 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-08-29 15:02:35.897919 | orchestrator | Friday 29 August 2025 15:00:43 +0000 (0:00:00.313) 0:00:24.458 *********
2025-08-29 15:02:35.897925 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.897932 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.897938 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:02:35.897944 | orchestrator |
2025-08-29 15:02:35.897950 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-08-29 15:02:35.897956 | orchestrator | Friday 29 August 2025 15:00:43 +0000 (0:00:00.332) 0:00:24.790 *********
2025-08-29 15:02:35.897963 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:02:35.897969 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:02:35.897975 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:02:35.897981 | orchestrator |
2025-08-29 15:02:35.897987 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-08-29 15:02:35.897993 | orchestrator | Friday 29 August 2025 15:00:44 +0000 (0:00:00.403) 0:00:25.429 *********
2025-08-29 15:02:35.898000 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:02:35.898006 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 15:02:35.898040 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 15:02:35.898048 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.898054 | orchestrator |
2025-08-29 15:02:35.898060 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-08-29 15:02:35.898067 | orchestrator | Friday 29 August 2025 15:00:44 +0000 (0:00:00.400) 0:00:25.832 *********
2025-08-29 15:02:35.898073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:02:35.898084 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 15:02:35.898090 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 15:02:35.898096 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.898102 | orchestrator |
2025-08-29 15:02:35.898108 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-08-29 15:02:35.898114 | orchestrator | Friday 29 August 2025 15:00:45 +0000 (0:00:00.396) 0:00:26.233 *********
2025-08-29 15:02:35.898120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:02:35.898127 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 15:02:35.898133 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 15:02:35.898139 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.898145 | orchestrator |
2025-08-29 15:02:35.898151 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-08-29 15:02:35.898157 | orchestrator | Friday 29 August 2025 15:00:45 +0000 (0:00:00.329) 0:00:26.630 *********
2025-08-29 15:02:35.898164 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:02:35.898170 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:02:35.898176 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:02:35.898182 | orchestrator |
2025-08-29 15:02:35.898188 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-08-29 15:02:35.898194 | orchestrator | Friday 29 August 2025 15:00:45 +0000 (0:00:00.650) 0:00:26.959 *********
2025-08-29 15:02:35.898200 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-08-29 15:02:35.898207 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-08-29 15:02:35.898213 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-08-29 15:02:35.898219 | orchestrator |
2025-08-29 15:02:35.898225 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-08-29 15:02:35.898231 | orchestrator | Friday 29 August 2025 15:00:46 +0000 (0:00:00.650) 0:00:27.610 *********
2025-08-29 15:02:35.898237 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-08-29 15:02:35.898247 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 15:02:35.898253 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 15:02:35.898259 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:02:35.898266 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-08-29 15:02:35.898272 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-08-29 15:02:35.898278 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-08-29 15:02:35.898284 | orchestrator |
2025-08-29 15:02:35.898290 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-08-29 15:02:35.898296 | orchestrator | Friday 29 August 2025 15:00:47 +0000 (0:00:01.083) 0:00:28.693 *********
2025-08-29 15:02:35.898302 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-08-29 15:02:35.898309 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 15:02:35.898315 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 15:02:35.898397 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:02:35.898404 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-08-29 15:02:35.898410 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-08-29 15:02:35.898417 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-08-29 15:02:35.898423 | orchestrator |
2025-08-29 15:02:35.898434 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-08-29 15:02:35.898440 | orchestrator | Friday 29 August 2025 15:00:49 +0000 (0:00:02.225) 0:00:30.919 *********
2025-08-29 15:02:35.898452 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:02:35.898458 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:02:35.898466 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-08-29 15:02:35.898475 | orchestrator |
2025-08-29 15:02:35.898485 |
orchestrator | TASK [create openstack pool(s)] ************************************************
2025-08-29 15:02:35.898494 | orchestrator | Friday 29 August 2025 15:00:50 +0000 (0:00:00.417) 0:00:31.336 *********
2025-08-29 15:02:35.898504 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-08-29 15:02:35.898514 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-08-29 15:02:35.898523 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-08-29 15:02:35.898533 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-08-29 15:02:35.898543 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-08-29 15:02:35.898553 | orchestrator |
2025-08-29 15:02:35.898563 | orchestrator | TASK [generate keys] ***********************************************************
2025-08-29 15:02:35.898573 | orchestrator | Friday 29 August 2025 15:01:36 +0000 (0:00:46.762) 0:01:18.098 *********
2025-08-29 15:02:35.898584 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:02:35.898593 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:02:35.898604 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:02:35.898612 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:02:35.898618 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:02:35.898625 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:02:35.898631 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-08-29 15:02:35.898637 | orchestrator |
2025-08-29 15:02:35.898643 | orchestrator | TASK [get keys from monitors] **************************************************
2025-08-29 15:02:35.898654 | orchestrator | Friday 29 August 2025 15:02:02 +0000 (0:00:25.386) 0:01:43.484 *********
2025-08-29 15:02:35.898660 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:02:35.898667 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:02:35.898673 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:02:35.898679 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:02:35.898700 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:02:35.898707 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:02:35.898713 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-08-29 15:02:35.898725 | orchestrator |
2025-08-29 15:02:35.898732 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-08-29 15:02:35.898738 | orchestrator | Friday 29 August 2025 15:02:14 +0000 (0:00:12.343) 0:01:55.828 *********
2025-08-29 15:02:35.898744 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:02:35.898750 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 15:02:35.898756 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 15:02:35.898763 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:02:35.898769 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 15:02:35.898775 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 15:02:35.898831 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:02:35.898840 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 15:02:35.898846 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 15:02:35.898852 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:02:35.898858 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 15:02:35.898864 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 15:02:35.898871 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:02:35.898877 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 15:02:35.898883 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 15:02:35.898889 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:02:35.898895 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 15:02:35.898901 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 15:02:35.898907 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-08-29 15:02:35.898914 | orchestrator |
2025-08-29 15:02:35.898920 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:02:35.898926 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-08-29 15:02:35.898934 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-08-29 15:02:35.898941 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-08-29 15:02:35.898947 | orchestrator |
2025-08-29 15:02:35.898953 | orchestrator |
2025-08-29 15:02:35.898959 | orchestrator |
2025-08-29 15:02:35.898965 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:02:35.898971 | orchestrator | Friday 29 August 2025 15:02:32 +0000 (0:00:17.825) 0:02:13.653 *********
2025-08-29 15:02:35.898977 | orchestrator | ===============================================================================
2025-08-29 15:02:35.898983 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.76s
2025-08-29 15:02:35.898990 | orchestrator | generate keys ---------------------------------------------------------- 25.39s
2025-08-29 15:02:35.898996 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.83s
2025-08-29 15:02:35.899002 | orchestrator | get keys from monitors ------------------------------------------------- 12.34s
2025-08-29 15:02:35.899008 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.23s
2025-08-29 15:02:35.899021 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.20s
2025-08-29 15:02:35.899027 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.75s
2025-08-29 15:02:35.899035 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.08s
2025-08-29 15:02:35.899045 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.94s
2025-08-29 15:02:35.899055 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.94s
2025-08-29 15:02:35.899063 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.84s
2025-08-29 15:02:35.899072 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.75s
2025-08-29 15:02:35.899085 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.71s
2025-08-29 15:02:35.899094 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.67s
2025-08-29 15:02:35.899103 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.67s
2025-08-29 15:02:35.899111 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.67s
2025-08-29 15:02:35.899120 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.66s
2025-08-29 15:02:35.899128 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.65s
2025-08-29 15:02:35.899137 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.65s
2025-08-29 15:02:35.899146 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.64s
2025-08-29 15:02:35.899154 | orchestrator | 2025-08-29 15:02:35 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED
2025-08-29 15:02:35.899163 | orchestrator | 2025-08-29 15:02:35 | INFO  | Task 47042be7-e187-4cd2-a2b8-9dc0b533c8b4 is in state STARTED
2025-08-29 15:02:35.899178 | orchestrator | 2025-08-29 15:02:35 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:02:38.939858 | orchestrator | 2025-08-29 15:02:38 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state STARTED
2025-08-29 15:02:38.941532 | orchestrator | 2025-08-29 15:02:38 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED
2025-08-29 15:02:38.943963 | orchestrator | 2025-08-29 15:02:38 | INFO  | Task 47042be7-e187-4cd2-a2b8-9dc0b533c8b4 is in state STARTED
2025-08-29 15:02:38.944000 | orchestrator | 2025-08-29 15:02:38 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:02:41.985277 | orchestrator | 2025-08-29 15:02:41 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state STARTED
2025-08-29 15:02:41.988274 | orchestrator | 2025-08-29 15:02:41 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED
2025-08-29 15:02:41.991371 | orchestrator | 2025-08-29 15:02:41 | INFO  | Task 47042be7-e187-4cd2-a2b8-9dc0b533c8b4 is in state STARTED
2025-08-29 15:02:41.991443 | orchestrator | 2025-08-29 15:02:41 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:02:45.042706 | orchestrator | 2025-08-29 15:02:45 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state STARTED
2025-08-29 15:02:45.043451 | orchestrator | 2025-08-29 15:02:45 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED
2025-08-29 15:02:45.045315 | orchestrator | 2025-08-29 15:02:45 | INFO  | Task 47042be7-e187-4cd2-a2b8-9dc0b533c8b4 is in state STARTED
2025-08-29 15:02:45.045442 | orchestrator | 2025-08-29
15:02:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:48.102638 | orchestrator | 2025-08-29 15:02:48 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state STARTED 2025-08-29 15:02:48.103439 | orchestrator | 2025-08-29 15:02:48 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:02:48.105125 | orchestrator | 2025-08-29 15:02:48 | INFO  | Task 47042be7-e187-4cd2-a2b8-9dc0b533c8b4 is in state STARTED 2025-08-29 15:02:48.105192 | orchestrator | 2025-08-29 15:02:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:51.164191 | orchestrator | 2025-08-29 15:02:51 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state STARTED 2025-08-29 15:02:51.166722 | orchestrator | 2025-08-29 15:02:51 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:02:51.169737 | orchestrator | 2025-08-29 15:02:51 | INFO  | Task 47042be7-e187-4cd2-a2b8-9dc0b533c8b4 is in state STARTED 2025-08-29 15:02:51.170164 | orchestrator | 2025-08-29 15:02:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:54.224267 | orchestrator | 2025-08-29 15:02:54 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state STARTED 2025-08-29 15:02:54.227786 | orchestrator | 2025-08-29 15:02:54 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:02:54.230558 | orchestrator | 2025-08-29 15:02:54 | INFO  | Task 47042be7-e187-4cd2-a2b8-9dc0b533c8b4 is in state STARTED 2025-08-29 15:02:54.230891 | orchestrator | 2025-08-29 15:02:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:57.278091 | orchestrator | 2025-08-29 15:02:57 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state STARTED 2025-08-29 15:02:57.279866 | orchestrator | 2025-08-29 15:02:57 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:02:57.283046 | orchestrator | 2025-08-29 15:02:57 | INFO  | Task 
47042be7-e187-4cd2-a2b8-9dc0b533c8b4 is in state STARTED 2025-08-29 15:02:57.283174 | orchestrator | 2025-08-29 15:02:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:00.324150 | orchestrator | 2025-08-29 15:03:00 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state STARTED 2025-08-29 15:03:00.324288 | orchestrator | 2025-08-29 15:03:00 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:03:00.325628 | orchestrator | 2025-08-29 15:03:00 | INFO  | Task 47042be7-e187-4cd2-a2b8-9dc0b533c8b4 is in state STARTED 2025-08-29 15:03:00.325772 | orchestrator | 2025-08-29 15:03:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:03.385120 | orchestrator | 2025-08-29 15:03:03 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state STARTED 2025-08-29 15:03:03.385226 | orchestrator | 2025-08-29 15:03:03 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:03:03.385245 | orchestrator | 2025-08-29 15:03:03 | INFO  | Task 47042be7-e187-4cd2-a2b8-9dc0b533c8b4 is in state STARTED 2025-08-29 15:03:03.385254 | orchestrator | 2025-08-29 15:03:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:06.434164 | orchestrator | 2025-08-29 15:03:06 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state STARTED 2025-08-29 15:03:06.437225 | orchestrator | 2025-08-29 15:03:06 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:03:06.438898 | orchestrator | 2025-08-29 15:03:06 | INFO  | Task 47042be7-e187-4cd2-a2b8-9dc0b533c8b4 is in state SUCCESS 2025-08-29 15:03:06.439743 | orchestrator | 2025-08-29 15:03:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:09.501219 | orchestrator | 2025-08-29 15:03:09 | INFO  | Task fb12a173-d282-4b0f-8d75-ea7aacfe84c5 is in state SUCCESS 2025-08-29 15:03:09.506549 | orchestrator | 2025-08-29 15:03:09.506859 | orchestrator | 2025-08-29 15:03:09.506882 | orchestrator | 
PLAY [Copy ceph keys to the configuration repository] **************************
2025-08-29 15:03:09.507187 | orchestrator | 
2025-08-29 15:03:09.507202 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-08-29 15:03:09.507242 | orchestrator | Friday 29 August 2025 15:02:37 +0000 (0:00:00.171) 0:00:00.171 *********
2025-08-29 15:03:09.507254 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-08-29 15:03:09.507267 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-08-29 15:03:09.507278 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-08-29 15:03:09.507292 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 15:03:09.507311 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-08-29 15:03:09.507338 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-08-29 15:03:09.507359 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-08-29 15:03:09.507377 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-08-29 15:03:09.507394 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-08-29 15:03:09.507411 | orchestrator | 
2025-08-29 15:03:09.507428 | orchestrator | TASK [Create share directory] **************************************************
2025-08-29 15:03:09.507444 | orchestrator | Friday 29 August 2025 15:02:42 +0000 (0:00:04.452) 0:00:04.623 *********
2025-08-29 15:03:09.507464 | orchestrator | changed: [testbed-manager -> localhost]
2025-08-29 15:03:09.507482 | orchestrator | 
2025-08-29 15:03:09.507501 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-08-29 15:03:09.507520 | orchestrator | Friday 29 August 2025 15:02:43 +0000 (0:00:01.086) 0:00:05.710 *********
2025-08-29 15:03:09.507539 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-08-29 15:03:09.507559 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-08-29 15:03:09.507576 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-08-29 15:03:09.507592 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 15:03:09.507610 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-08-29 15:03:09.507629 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-08-29 15:03:09.507850 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-08-29 15:03:09.507873 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-08-29 15:03:09.507892 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-08-29 15:03:09.507911 | orchestrator | 
2025-08-29 15:03:09.507926 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-08-29 15:03:09.507937 | orchestrator | Friday 29 August 2025 15:02:58 +0000 (0:00:14.751) 0:00:20.462 *********
2025-08-29 15:03:09.507967 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-08-29 15:03:09.507979 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-08-29 15:03:09.507990 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-08-29 15:03:09.508001 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 15:03:09.508012 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-08-29 15:03:09.508022 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-08-29 15:03:09.508033 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-08-29 15:03:09.508044 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-08-29 15:03:09.508069 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-08-29 15:03:09.508080 | orchestrator | 
2025-08-29 15:03:09.508091 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:03:09.508102 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:03:09.508115 | orchestrator | 
2025-08-29 15:03:09.508126 | orchestrator | 
2025-08-29 15:03:09.508136 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:03:09.508147 | orchestrator | Friday 29 August 2025 15:03:05 +0000 (0:00:07.087) 0:00:27.549 *********
2025-08-29 15:03:09.508158 | orchestrator | ===============================================================================
2025-08-29 15:03:09.508169 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.75s
2025-08-29 15:03:09.508180 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.09s
2025-08-29 15:03:09.508190 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.45s
2025-08-29 15:03:09.508201 | orchestrator | Create share directory -------------------------------------------------- 1.09s
2025-08-29 15:03:09.508212 | orchestrator | 
2025-08-29 15:03:09.508223 | orchestrator | 
2025-08-29 15:03:09.508234 | orchestrator | PLAY
[Group hosts based on configuration] **************************************
2025-08-29 15:03:09.508245 | orchestrator | 
2025-08-29 15:03:09.508275 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:03:09.508287 | orchestrator | Friday 29 August 2025 15:01:11 +0000 (0:00:00.371) 0:00:00.371 *********
2025-08-29 15:03:09.508298 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:09.508309 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:09.508320 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:09.508331 | orchestrator | 
2025-08-29 15:03:09.508342 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:03:09.508354 | orchestrator | Friday 29 August 2025 15:01:11 +0000 (0:00:00.359) 0:00:00.731 *********
2025-08-29 15:03:09.508364 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-08-29 15:03:09.508375 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2025-08-29 15:03:09.508386 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-08-29 15:03:09.508397 | orchestrator | 
2025-08-29 15:03:09.508408 | orchestrator | PLAY [Apply role horizon] ******************************************************
2025-08-29 15:03:09.508419 | orchestrator | 
2025-08-29 15:03:09.508430 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-08-29 15:03:09.508443 | orchestrator | Friday 29 August 2025 15:01:12 +0000 (0:00:00.489) 0:00:01.220 *********
2025-08-29 15:03:09.508457 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:03:09.508470 | orchestrator | 
2025-08-29 15:03:09.508482 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2025-08-29 15:03:09.508495 | orchestrator | Friday 29 August 2025 15:01:12 +0000 (0:00:00.546)
0:00:01.767 ********* 2025-08-29 15:03:09.508522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:03:09.508563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:03:09.508587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:03:09.508613 | orchestrator | 2025-08-29 15:03:09.508633 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-08-29 15:03:09.508696 | orchestrator | Friday 29 August 2025 15:01:13 +0000 (0:00:01.162) 0:00:02.929 ********* 2025-08-29 15:03:09.508719 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:09.508740 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:09.508760 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:09.508780 | orchestrator | 2025-08-29 15:03:09.508799 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 15:03:09.508818 | orchestrator | Friday 29 August 2025 15:01:14 +0000 (0:00:00.610) 0:00:03.539 ********* 2025-08-29 15:03:09.508843 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 15:03:09.508865 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 15:03:09.508893 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 15:03:09.508913 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 15:03:09.508933 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 
15:03:09.508950 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 15:03:09.508969 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-08-29 15:03:09.508980 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 15:03:09.508991 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 15:03:09.509002 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 15:03:09.509013 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 15:03:09.509024 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 15:03:09.509035 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 15:03:09.509046 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 15:03:09.509057 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-08-29 15:03:09.509068 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 15:03:09.509079 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 15:03:09.509114 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 15:03:09.509125 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 15:03:09.509136 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 15:03:09.509147 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 15:03:09.509158 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 
'enabled': False})  2025-08-29 15:03:09.509169 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-08-29 15:03:09.509180 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 15:03:09.509192 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-08-29 15:03:09.509205 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-08-29 15:03:09.509217 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-08-29 15:03:09.509228 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-08-29 15:03:09.509246 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-08-29 15:03:09.509257 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-08-29 15:03:09.509268 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-08-29 15:03:09.509280 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-08-29 15:03:09.509291 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item={'name': 'nova', 'enabled': True}) 2025-08-29 15:03:09.509302 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-08-29 15:03:09.509313 | orchestrator | 2025-08-29 15:03:09.509324 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:03:09.509335 | orchestrator | Friday 29 August 2025 15:01:15 +0000 (0:00:00.945) 0:00:04.485 ********* 2025-08-29 15:03:09.509345 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:09.509357 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:09.509368 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:09.509379 | orchestrator | 2025-08-29 15:03:09.509390 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:03:09.509401 | orchestrator | Friday 29 August 2025 15:01:15 +0000 (0:00:00.375) 0:00:04.860 ********* 2025-08-29 15:03:09.509412 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:09.509423 | orchestrator | 2025-08-29 15:03:09.509434 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:03:09.509452 | orchestrator | Friday 29 August 2025 15:01:16 +0000 (0:00:00.208) 0:00:05.069 ********* 2025-08-29 15:03:09.509463 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:09.509475 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:09.509486 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:09.509497 | orchestrator | 2025-08-29 15:03:09.509509 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:03:09.509526 | orchestrator | Friday 29 August 2025 15:01:16 +0000 (0:00:00.658) 0:00:05.727 ********* 2025-08-29 15:03:09.509538 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:09.509549 | orchestrator | ok: [testbed-node-1] 
2025-08-29 15:03:09.509560 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:09.509572 | orchestrator | 2025-08-29 15:03:09.509583 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:03:09.509594 | orchestrator | Friday 29 August 2025 15:01:17 +0000 (0:00:00.369) 0:00:06.097 ********* 2025-08-29 15:03:09.509605 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:09.509616 | orchestrator | 2025-08-29 15:03:09.509627 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:03:09.509638 | orchestrator | Friday 29 August 2025 15:01:17 +0000 (0:00:00.147) 0:00:06.245 ********* 2025-08-29 15:03:09.509721 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:09.509744 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:09.509764 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:09.509784 | orchestrator | 2025-08-29 15:03:09.509801 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:03:09.509819 | orchestrator | Friday 29 August 2025 15:01:17 +0000 (0:00:00.324) 0:00:06.569 ********* 2025-08-29 15:03:09.509831 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:09.509842 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:09.509852 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:09.509863 | orchestrator | 2025-08-29 15:03:09.509874 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:03:09.509885 | orchestrator | Friday 29 August 2025 15:01:17 +0000 (0:00:00.355) 0:00:06.925 ********* 2025-08-29 15:03:09.510242 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:09.510266 | orchestrator | 2025-08-29 15:03:09.510278 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:03:09.510288 | orchestrator | Friday 29 August 2025 
15:01:18 +0000 (0:00:00.138) 0:00:07.063 ********* 2025-08-29 15:03:09.510298 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:09.510309 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:09.510319 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:09.510329 | orchestrator | 2025-08-29 15:03:09.510339 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:03:09.510349 | orchestrator | Friday 29 August 2025 15:01:18 +0000 (0:00:00.633) 0:00:07.697 ********* 2025-08-29 15:03:09.510359 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:09.510370 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:09.510379 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:09.510390 | orchestrator | 2025-08-29 15:03:09.510399 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:03:09.510409 | orchestrator | Friday 29 August 2025 15:01:19 +0000 (0:00:00.342) 0:00:08.039 ********* 2025-08-29 15:03:09.510419 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:09.510430 | orchestrator | 2025-08-29 15:03:09.510440 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:03:09.510450 | orchestrator | Friday 29 August 2025 15:01:19 +0000 (0:00:00.140) 0:00:08.179 ********* 2025-08-29 15:03:09.510460 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:09.510469 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:09.510479 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:09.510489 | orchestrator | 2025-08-29 15:03:09.510499 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:03:09.510509 | orchestrator | Friday 29 August 2025 15:01:19 +0000 (0:00:00.296) 0:00:08.476 ********* 2025-08-29 15:03:09.510526 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:09.510537 | orchestrator | ok: 
[testbed-node-1]
2025-08-29 15:03:09.510547 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:09.510556 | orchestrator |
2025-08-29 15:03:09.510566 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 15:03:09.510576 | orchestrator | Friday 29 August 2025 15:01:19 +0000 (0:00:00.306) 0:00:08.782 *********
2025-08-29 15:03:09.510597 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:09.510607 | orchestrator |
2025-08-29 15:03:09.510617 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 15:03:09.510627 | orchestrator | Friday 29 August 2025 15:01:20 +0000 (0:00:00.513) 0:00:09.296 *********
2025-08-29 15:03:09.510637 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:09.510673 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:09.510684 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:09.510694 | orchestrator |
2025-08-29 15:03:09.510704 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 15:03:09.510714 | orchestrator | Friday 29 August 2025 15:01:20 +0000 (0:00:00.392) 0:00:09.688 *********
2025-08-29 15:03:09.510723 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:09.510737 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:09.510753 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:09.510768 | orchestrator |
2025-08-29 15:03:09.510784 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 15:03:09.510800 | orchestrator | Friday 29 August 2025 15:01:21 +0000 (0:00:00.418) 0:00:10.107 *********
2025-08-29 15:03:09.510817 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:09.510834 | orchestrator |
2025-08-29 15:03:09.510851 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 15:03:09.510867 | orchestrator | Friday 29 August 2025 15:01:21 +0000 (0:00:00.214) 0:00:10.322 *********
2025-08-29 15:03:09.510884 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:09.510894 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:09.510904 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:09.510913 | orchestrator |
2025-08-29 15:03:09.510923 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 15:03:09.510932 | orchestrator | Friday 29 August 2025 15:01:21 +0000 (0:00:00.439) 0:00:10.761 *********
2025-08-29 15:03:09.510942 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:09.510951 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:09.510961 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:09.510971 | orchestrator |
2025-08-29 15:03:09.510993 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 15:03:09.511004 | orchestrator | Friday 29 August 2025 15:01:22 +0000 (0:00:00.548) 0:00:11.310 *********
2025-08-29 15:03:09.511014 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:09.511023 | orchestrator |
2025-08-29 15:03:09.511033 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 15:03:09.511042 | orchestrator | Friday 29 August 2025 15:01:22 +0000 (0:00:00.144) 0:00:11.454 *********
2025-08-29 15:03:09.511052 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:09.511062 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:09.511071 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:09.511081 | orchestrator |
2025-08-29 15:03:09.511090 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 15:03:09.511100 | orchestrator | Friday 29 August 2025 15:01:22 +0000 (0:00:00.328) 0:00:11.783 *********
2025-08-29 15:03:09.511109 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:09.511119 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:09.511129 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:09.511138 | orchestrator |
2025-08-29 15:03:09.511148 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 15:03:09.511158 | orchestrator | Friday 29 August 2025 15:01:23 +0000 (0:00:00.399) 0:00:12.182 *********
2025-08-29 15:03:09.511168 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:09.511178 | orchestrator |
2025-08-29 15:03:09.511187 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 15:03:09.511197 | orchestrator | Friday 29 August 2025 15:01:23 +0000 (0:00:00.145) 0:00:12.328 *********
2025-08-29 15:03:09.511207 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:09.511217 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:09.511237 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:09.511247 | orchestrator |
2025-08-29 15:03:09.511257 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 15:03:09.511267 | orchestrator | Friday 29 August 2025 15:01:23 +0000 (0:00:00.325) 0:00:12.653 *********
2025-08-29 15:03:09.511277 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:09.511286 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:09.511296 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:09.511305 | orchestrator |
2025-08-29 15:03:09.511315 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 15:03:09.511325 | orchestrator | Friday 29 August 2025 15:01:24 +0000 (0:00:00.607) 0:00:13.260 *********
2025-08-29 15:03:09.511335 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:09.511344 | orchestrator |
2025-08-29 15:03:09.511354 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 15:03:09.511365 | orchestrator | Friday 29 August 2025 15:01:24 +0000 (0:00:00.154) 0:00:13.414 *********
2025-08-29 15:03:09.511375 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:09.511384 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:09.511394 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:09.511403 | orchestrator |
2025-08-29 15:03:09.511413 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 15:03:09.511423 | orchestrator | Friday 29 August 2025 15:01:24 +0000 (0:00:00.314) 0:00:13.729 *********
2025-08-29 15:03:09.511432 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:09.511442 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:09.511451 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:09.511461 | orchestrator |
2025-08-29 15:03:09.511470 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 15:03:09.511480 | orchestrator | Friday 29 August 2025 15:01:25 +0000 (0:00:00.380) 0:00:14.110 *********
2025-08-29 15:03:09.511490 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:09.511500 | orchestrator |
2025-08-29 15:03:09.511509 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 15:03:09.511525 | orchestrator | Friday 29 August 2025 15:01:25 +0000 (0:00:00.152) 0:00:14.263 *********
2025-08-29 15:03:09.511535 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:09.511545 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:09.511555 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:09.511564 | orchestrator |
2025-08-29 15:03:09.511574 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-08-29 15:03:09.511585 | orchestrator | Friday 29 August 2025 15:01:25 +0000 (0:00:00.588) 0:00:14.851 *********
2025-08-29 15:03:09.511594 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:03:09.511604 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:03:09.511614 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:03:09.511624 | orchestrator |
2025-08-29 15:03:09.511633 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-08-29 15:03:09.511670 | orchestrator | Friday 29 August 2025 15:01:28 +0000 (0:00:02.179) 0:00:17.031 *********
2025-08-29 15:03:09.511682 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-08-29 15:03:09.511692 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-08-29 15:03:09.511702 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-08-29 15:03:09.511712 | orchestrator |
2025-08-29 15:03:09.511722 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-08-29 15:03:09.511731 | orchestrator | Friday 29 August 2025 15:01:30 +0000 (0:00:02.076) 0:00:19.107 *********
2025-08-29 15:03:09.511742 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-08-29 15:03:09.511752 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-08-29 15:03:09.511769 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-08-29 15:03:09.511779 | orchestrator |
2025-08-29 15:03:09.511795 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-08-29 15:03:09.511812 | orchestrator | Friday 29 August 2025 15:01:32 +0000 (0:00:02.655) 0:00:21.763 *********
2025-08-29 15:03:09.511839 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-08-29 15:03:09.511858 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-08-29 15:03:09.511876 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-08-29 15:03:09.511891 | orchestrator |
2025-08-29 15:03:09.511907 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-08-29 15:03:09.511917 | orchestrator | Friday 29 August 2025 15:01:35 +0000 (0:00:02.297) 0:00:24.061 *********
2025-08-29 15:03:09.511927 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:09.511937 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:09.511947 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:09.511957 | orchestrator |
2025-08-29 15:03:09.511967 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-08-29 15:03:09.511976 | orchestrator | Friday 29 August 2025 15:01:35 +0000 (0:00:00.407) 0:00:24.468 *********
2025-08-29 15:03:09.511987 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:09.511996 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:09.512006 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:09.512016 | orchestrator |
2025-08-29 15:03:09.512026 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-08-29 15:03:09.512036 | orchestrator | Friday 29 August 2025 15:01:35 +0000 (0:00:00.350) 0:00:24.819 *********
2025-08-29 15:03:09.512046 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:03:09.512055 | orchestrator |
2025-08-29 15:03:09.512065 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2025-08-29 15:03:09.512076 | orchestrator | Friday 29 August 2025 15:01:36 +0000 (0:00:00.655) 0:00:25.474 *********
2025-08-29 15:03:09.512095 | orchestrator |
changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:03:09.512127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:03:09.512144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:03:09.512162 | orchestrator | 2025-08-29 15:03:09.512172 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-08-29 15:03:09.512183 | orchestrator | Friday 29 August 2025 15:01:38 +0000 (0:00:01.895) 0:00:27.369 ********* 2025-08-29 15:03:09.512208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:03:09.512224 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:09.512240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:03:09.512266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:03:09.512278 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:09.512288 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:09.512298 | orchestrator | 2025-08-29 15:03:09.512308 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-08-29 15:03:09.512318 | orchestrator | Friday 29 August 2025 15:01:39 +0000 (0:00:00.666) 0:00:28.036 ********* 2025-08-29 15:03:09.512343 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:03:09.512366 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:09.512383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:03:09.512404 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:09.512423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:03:09.512434 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:09.512444 | orchestrator | 2025-08-29 15:03:09.512455 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-08-29 15:03:09.512465 | orchestrator | Friday 29 August 2025 15:01:40 +0000 (0:00:00.997) 0:00:29.034 ********* 2025-08-29 15:03:09.512481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:03:09.512507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:03:09.512526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:03:09.512543 | orchestrator | 2025-08-29 15:03:09.512553 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 15:03:09.512562 | orchestrator | Friday 29 August 2025 15:01:41 +0000 (0:00:01.570) 0:00:30.605 ********* 2025-08-29 15:03:09.512572 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
15:03:09.512582 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:09.512592 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:09.512602 | orchestrator | 2025-08-29 15:03:09.512611 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 15:03:09.512622 | orchestrator | Friday 29 August 2025 15:01:41 +0000 (0:00:00.306) 0:00:30.911 ********* 2025-08-29 15:03:09.512631 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:03:09.512703 | orchestrator | 2025-08-29 15:03:09.512719 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-08-29 15:03:09.512730 | orchestrator | Friday 29 August 2025 15:01:42 +0000 (0:00:00.543) 0:00:31.455 ********* 2025-08-29 15:03:09.512740 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:03:09.512750 | orchestrator | 2025-08-29 15:03:09.512766 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-08-29 15:03:09.512776 | orchestrator | Friday 29 August 2025 15:01:44 +0000 (0:00:02.146) 0:00:33.601 ********* 2025-08-29 15:03:09.512787 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:03:09.512797 | orchestrator | 2025-08-29 15:03:09.512805 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-08-29 15:03:09.512813 | orchestrator | Friday 29 August 2025 15:01:47 +0000 (0:00:02.734) 0:00:36.336 ********* 2025-08-29 15:03:09.512827 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:03:09.512841 | orchestrator | 2025-08-29 15:03:09.512854 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-08-29 15:03:09.512868 | orchestrator | Friday 29 August 2025 15:02:03 +0000 (0:00:15.756) 0:00:52.092 ********* 2025-08-29 15:03:09.512883 | orchestrator | 2025-08-29 15:03:09.512896 | 
orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-08-29 15:03:09.512910 | orchestrator | Friday 29 August 2025 15:02:03 +0000 (0:00:00.076) 0:00:52.169 ********* 2025-08-29 15:03:09.512923 | orchestrator | 2025-08-29 15:03:09.512935 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-08-29 15:03:09.512944 | orchestrator | Friday 29 August 2025 15:02:03 +0000 (0:00:00.119) 0:00:52.289 ********* 2025-08-29 15:03:09.512951 | orchestrator | 2025-08-29 15:03:09.512959 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-08-29 15:03:09.512967 | orchestrator | Friday 29 August 2025 15:02:03 +0000 (0:00:00.106) 0:00:52.396 ********* 2025-08-29 15:03:09.512975 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:03:09.512983 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:03:09.512991 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:03:09.512999 | orchestrator | 2025-08-29 15:03:09.513015 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:03:09.513024 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-08-29 15:03:09.513033 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-08-29 15:03:09.513040 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-08-29 15:03:09.513048 | orchestrator | 2025-08-29 15:03:09.513056 | orchestrator | 2025-08-29 15:03:09.513064 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:03:09.513072 | orchestrator | Friday 29 August 2025 15:03:07 +0000 (0:01:03.649) 0:01:56.046 ********* 2025-08-29 15:03:09.513080 | orchestrator | 
=============================================================================== 2025-08-29 15:03:09.513088 | orchestrator | horizon : Restart horizon container ------------------------------------ 63.65s 2025-08-29 15:03:09.513096 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.76s 2025-08-29 15:03:09.513104 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.73s 2025-08-29 15:03:09.513112 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.66s 2025-08-29 15:03:09.513120 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.30s 2025-08-29 15:03:09.513127 | orchestrator | horizon : Copying over config.json files for services ------------------- 2.18s 2025-08-29 15:03:09.513135 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.15s 2025-08-29 15:03:09.513148 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.08s 2025-08-29 15:03:09.513156 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.90s 2025-08-29 15:03:09.513164 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.57s 2025-08-29 15:03:09.513172 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.16s 2025-08-29 15:03:09.513180 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.00s 2025-08-29 15:03:09.513188 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.95s 2025-08-29 15:03:09.513195 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.67s 2025-08-29 15:03:09.513203 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.66s 2025-08-29 15:03:09.513211 | orchestrator | horizon : 
include_tasks ------------------------------------------------- 0.66s 2025-08-29 15:03:09.513219 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.63s 2025-08-29 15:03:09.513227 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.61s 2025-08-29 15:03:09.513234 | orchestrator | horizon : Update policy file name --------------------------------------- 0.61s 2025-08-29 15:03:09.513242 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.59s 2025-08-29 15:03:09.513250 | orchestrator | 2025-08-29 15:03:09 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:03:09.513258 | orchestrator | 2025-08-29 15:03:09 | INFO  | Task 99a279b0-4376-4399-9f23-69e0d525ea65 is in state STARTED 2025-08-29 15:03:09.513267 | orchestrator | 2025-08-29 15:03:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:12.551413 | orchestrator | 2025-08-29 15:03:12 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:03:12.553418 | orchestrator | 2025-08-29 15:03:12 | INFO  | Task 99a279b0-4376-4399-9f23-69e0d525ea65 is in state STARTED 2025-08-29 15:03:12.553472 | orchestrator | 2025-08-29 15:03:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:15.622228 | orchestrator | 2025-08-29 15:03:15 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:03:15.624535 | orchestrator | 2025-08-29 15:03:15 | INFO  | Task 99a279b0-4376-4399-9f23-69e0d525ea65 is in state STARTED 2025-08-29 15:03:15.624635 | orchestrator | 2025-08-29 15:03:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:18.679049 | orchestrator | 2025-08-29 15:03:18 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:03:18.681140 | orchestrator | 2025-08-29 15:03:18 | INFO  | Task 99a279b0-4376-4399-9f23-69e0d525ea65 is in state STARTED 
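The orchestrator lines above show a simple wait loop: the task state is fetched, and if it is still STARTED the process sleeps ("Wait 1 second(s) until the next check") and retries until the task reaches SUCCESS. A minimal sketch of such a loop is below; `get_task_state` is a hypothetical lookup callable standing in for whatever client the real OSISM tooling uses, not its actual API:

```python
import time

def wait_for_task(task_id, get_task_state, interval=1.0, timeout=300.0):
    """Poll a task until it leaves STARTED, mirroring the log's wait loop.

    get_task_state is a caller-supplied callable (hypothetical here)
    that returns the current state string for task_id.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_task_state(task_id)
        if state != "STARTED":
            return state
        # "Wait 1 second(s) until the next check"
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} still STARTED after {timeout}s")
```

In the log, two tasks are polled this way in lockstep every few seconds until one transitions to SUCCESS at 15:04:04.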
2025-08-29 15:03:52 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:03:52.238416 | orchestrator | 2025-08-29 15:03:52 | INFO  | Task 99a279b0-4376-4399-9f23-69e0d525ea65 is in state STARTED 2025-08-29 15:03:52.238473 | orchestrator | 2025-08-29 15:03:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:55.288062 | orchestrator | 2025-08-29 15:03:55 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:03:55.290164 | orchestrator | 2025-08-29 15:03:55 | INFO  | Task 99a279b0-4376-4399-9f23-69e0d525ea65 is in state STARTED 2025-08-29 15:03:55.290222 | orchestrator | 2025-08-29 15:03:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:58.338009 | orchestrator | 2025-08-29 15:03:58 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:03:58.340324 | orchestrator | 2025-08-29 15:03:58 | INFO  | Task 99a279b0-4376-4399-9f23-69e0d525ea65 is in state STARTED 2025-08-29 15:03:58.340701 | orchestrator | 2025-08-29 15:03:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:01.396787 | orchestrator | 2025-08-29 15:04:01 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state STARTED 2025-08-29 15:04:01.397912 | orchestrator | 2025-08-29 15:04:01 | INFO  | Task 99a279b0-4376-4399-9f23-69e0d525ea65 is in state STARTED 2025-08-29 15:04:01.397957 | orchestrator | 2025-08-29 15:04:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:04.432238 | orchestrator | 2025-08-29 15:04:04 | INFO  | Task ca6dd6d3-8b70-452b-937a-544bbad7a14c is in state STARTED 2025-08-29 15:04:04.435116 | orchestrator | 2025-08-29 15:04:04 | INFO  | Task b9161ca7-2dac-43cb-ae72-a2173b6965c9 is in state STARTED 2025-08-29 15:04:04.437243 | orchestrator | 2025-08-29 15:04:04 | INFO  | Task b126cc9a-413c-413a-8293-db3c3f902b32 is in state SUCCESS 2025-08-29 15:04:04.438572 | orchestrator | 2025-08-29 15:04:04.438656 | orchestrator | 2025-08-29 
15:04:04.438669 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:04:04.438681 | orchestrator | 2025-08-29 15:04:04.438693 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:04:04.438705 | orchestrator | Friday 29 August 2025 15:01:10 +0000 (0:00:00.338) 0:00:00.338 ********* 2025-08-29 15:04:04.438717 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:04.439152 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:04:04.439168 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:04:04.439179 | orchestrator | 2025-08-29 15:04:04.439190 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:04:04.439202 | orchestrator | Friday 29 August 2025 15:01:11 +0000 (0:00:00.323) 0:00:00.661 ********* 2025-08-29 15:04:04.439213 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-08-29 15:04:04.439225 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-08-29 15:04:04.439264 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-08-29 15:04:04.439276 | orchestrator | 2025-08-29 15:04:04.439287 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-08-29 15:04:04.439298 | orchestrator | 2025-08-29 15:04:04.439323 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 15:04:04.439335 | orchestrator | Friday 29 August 2025 15:01:11 +0000 (0:00:00.680) 0:00:01.342 ********* 2025-08-29 15:04:04.439346 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:04:04.439358 | orchestrator | 2025-08-29 15:04:04.439369 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-08-29 15:04:04.439380 | orchestrator | Friday 29 August 
2025 15:01:12 +0000 (0:00:00.728) 0:00:02.071 ********* 2025-08-29 15:04:04.439398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:04.439415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:04.439472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:04.439498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:04.439633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:04.439728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:04.439745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:04.439758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:04.439769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:04.439781 | orchestrator | 2025-08-29 15:04:04.439792 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-08-29 15:04:04.439804 | orchestrator | Friday 29 August 2025 15:01:14 +0000 (0:00:01.682) 0:00:03.754 ********* 2025-08-29 15:04:04.439848 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-08-29 15:04:04.439872 | orchestrator | 2025-08-29 15:04:04.439883 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-08-29 15:04:04.439894 | orchestrator | Friday 29 August 2025 15:01:15 +0000 (0:00:01.098) 0:00:04.852 ********* 2025-08-29 15:04:04.439911 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:04.439930 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:04:04.439948 | orchestrator | 
ok: [testbed-node-2] 2025-08-29 15:04:04.439965 | orchestrator | 2025-08-29 15:04:04.439984 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-08-29 15:04:04.440002 | orchestrator | Friday 29 August 2025 15:01:16 +0000 (0:00:00.657) 0:00:05.509 ********* 2025-08-29 15:04:04.440019 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:04:04.440037 | orchestrator | 2025-08-29 15:04:04.440054 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 15:04:04.440072 | orchestrator | Friday 29 August 2025 15:01:16 +0000 (0:00:00.794) 0:00:06.303 ********* 2025-08-29 15:04:04.440090 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:04:04.440108 | orchestrator | 2025-08-29 15:04:04.440136 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-08-29 15:04:04.440154 | orchestrator | Friday 29 August 2025 15:01:17 +0000 (0:00:00.714) 0:00:07.018 ********* 2025-08-29 15:04:04.440173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:04.440196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:04.440217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:04.440257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:04.440275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:04.440287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:04.440298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:04.440310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:04.440321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:04.440340 | orchestrator | 2025-08-29 15:04:04.440351 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-08-29 15:04:04.440362 | orchestrator | Friday 29 August 2025 15:01:20 +0000 (0:00:03.287) 0:00:10.305 ********* 2025-08-29 15:04:04.440385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:04:04.440405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:04.440419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:04:04.440432 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:04.440446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:04:04.440460 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:04.440486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:04:04.440499 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:04.440518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:04:04.440532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:04.440546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:04:04.440559 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:04.440572 | orchestrator | 2025-08-29 15:04:04.440584 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-08-29 15:04:04.440635 | orchestrator | Friday 29 August 2025 15:01:21 +0000 (0:00:01.017) 0:00:11.323 ********* 2025-08-29 
15:04:04.440649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:04:04.440794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:04.440812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:04:04.440824 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:04.440842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:04:04.440855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:04.440866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:04:04.440885 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:04.440897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:04:04.440916 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:04.440933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:04:04.440945 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:04.440956 | orchestrator | 2025-08-29 15:04:04.440967 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-08-29 15:04:04.440978 | orchestrator | Friday 29 August 2025 15:01:22 +0000 (0:00:00.787) 0:00:12.110 ********* 2025-08-29 15:04:04.440989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:04.441009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:04.441030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:04.441052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:04.441064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2025-08-29 15:04:04.441076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:04:04.441087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:04.441117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:04.441129 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:04.441140 | orchestrator | 2025-08-29 15:04:04.441151 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-08-29 15:04:04.441163 | orchestrator | Friday 29 August 2025 15:01:25 +0000 (0:00:03.249) 0:00:15.360 ********* 2025-08-29 15:04:04.441187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:04.441200 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:04.441212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:04.441230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:04.441249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:04:04.441261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:04:04.441277 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:04.441289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:04.441306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:04:04.441318 | orchestrator | 2025-08-29 15:04:04.441329 | orchestrator | 
TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-08-29 15:04:04.441340 | orchestrator | Friday 29 August 2025 15:01:32 +0000 (0:00:06.725) 0:00:22.085 ********* 2025-08-29 15:04:04.441351 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:04:04.441362 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:04:04.441373 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:04:04.441384 | orchestrator | 2025-08-29 15:04:04.441395 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-08-29 15:04:04.441405 | orchestrator | Friday 29 August 2025 15:01:34 +0000 (0:00:01.593) 0:00:23.679 ********* 2025-08-29 15:04:04.441416 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:04.441427 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:04.441438 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:04.441448 | orchestrator | 2025-08-29 15:04:04.441460 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-08-29 15:04:04.441471 | orchestrator | Friday 29 August 2025 15:01:34 +0000 (0:00:00.561) 0:00:24.241 ********* 2025-08-29 15:04:04.441481 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:04.441492 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:04.441503 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:04.441514 | orchestrator | 2025-08-29 15:04:04.441524 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-08-29 15:04:04.441535 | orchestrator | Friday 29 August 2025 15:01:35 +0000 (0:00:00.359) 0:00:24.601 ********* 2025-08-29 15:04:04.441546 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:04.441557 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:04.441568 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:04.441578 | orchestrator | 2025-08-29 15:04:04.441613 | orchestrator | TASK 
[keystone : Copying over existing policy file] ****************************
2025-08-29 15:04:04.441625 | orchestrator | Friday 29 August 2025 15:01:35 +0000 (0:00:00.585) 0:00:25.186 *********
2025-08-29 15:04:04.441646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 15:04:04.441664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 15:04:04.441683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 15:04:04.441695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 15:04:04.441708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 15:04:04.441726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 15:04:04.441742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 15:04:04.441760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 15:04:04.441772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 15:04:04.441783 | orchestrator |
2025-08-29 15:04:04.441794 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-08-29 15:04:04.441805 | orchestrator | Friday 29 August 2025 15:01:38 +0000 (0:00:02.513) 0:00:27.700 *********
2025-08-29 15:04:04.441816 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:04:04.441827 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:04:04.441838 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:04:04.441848 | orchestrator |
2025-08-29 15:04:04.441859 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2025-08-29 15:04:04.441870 | orchestrator | Friday 29 August 2025 15:01:38 +0000 (0:00:00.297) 0:00:27.997
*********
2025-08-29 15:04:04.441881 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-08-29 15:04:04.441892 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-08-29 15:04:04.441903 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-08-29 15:04:04.441914 | orchestrator |
2025-08-29 15:04:04.441925 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-08-29 15:04:04.441936 | orchestrator | Friday 29 August 2025 15:01:40 +0000 (0:00:01.766) 0:00:29.763 *********
2025-08-29 15:04:04.441946 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 15:04:04.441957 | orchestrator |
2025-08-29 15:04:04.441968 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-08-29 15:04:04.441979 | orchestrator | Friday 29 August 2025 15:01:41 +0000 (0:00:01.081) 0:00:30.845 *********
2025-08-29 15:04:04.441990 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:04:04.442001 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:04:04.442011 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:04:04.442095 | orchestrator |
2025-08-29 15:04:04.442107 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-08-29 15:04:04.442118 | orchestrator | Friday 29 August 2025 15:01:42 +0000 (0:00:00.856) 0:00:31.702 *********
2025-08-29 15:04:04.442129 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-08-29 15:04:04.442140 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-08-29 15:04:04.442150 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 15:04:04.442161 | orchestrator |
2025-08-29 15:04:04.442180 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-08-29 15:04:04.442198 | orchestrator | Friday 29 August 2025 15:01:43 +0000 (0:00:01.147) 0:00:32.849 *********
2025-08-29 15:04:04.442227 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:04:04.442246 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:04:04.442264 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:04:04.442282 | orchestrator |
2025-08-29 15:04:04.442299 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-08-29 15:04:04.442315 | orchestrator | Friday 29 August 2025 15:01:43 +0000 (0:00:00.329) 0:00:33.179 *********
2025-08-29 15:04:04.442332 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-08-29 15:04:04.442350 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-08-29 15:04:04.442368 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-08-29 15:04:04.442387 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-08-29 15:04:04.442407 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-08-29 15:04:04.442435 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-08-29 15:04:04.442453 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-08-29 15:04:04.442464 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-08-29 15:04:04.442475 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-08-29 15:04:04.442486 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-08-29 15:04:04.442497 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-08-29 15:04:04.442508 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-08-29 15:04:04.442519 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-08-29 15:04:04.442530 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-08-29 15:04:04.442540 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-08-29 15:04:04.442551 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-08-29 15:04:04.442562 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-08-29 15:04:04.442574 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-08-29 15:04:04.442585 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-08-29 15:04:04.442628 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-08-29 15:04:04.442639 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-08-29 15:04:04.442650 | orchestrator |
2025-08-29 15:04:04.442661 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-08-29 15:04:04.442671 | orchestrator | Friday 29 August 2025 15:01:52 +0000 (0:00:09.182) 0:00:42.361 *********
2025-08-29 15:04:04.442682 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-08-29 15:04:04.442693 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-08-29 15:04:04.442703 | orchestrator | changed:
[testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-08-29 15:04:04.442714 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-08-29 15:04:04.442734 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-08-29 15:04:04.442745 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-08-29 15:04:04.442756 | orchestrator |
2025-08-29 15:04:04.442767 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-08-29 15:04:04.442778 | orchestrator | Friday 29 August 2025 15:01:55 +0000 (0:00:03.101) 0:00:45.463 *********
2025-08-29 15:04:04.442799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 15:04:04.442818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 15:04:04.442831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 15:04:04.442843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 15:04:04.442862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 15:04:04.442874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 15:04:04.442891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 15:04:04.442912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 15:04:04.442924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 15:04:04.442935 | orchestrator |
2025-08-29 15:04:04.442946 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-08-29 15:04:04.442957 | orchestrator | Friday 29 August 2025
15:01:58 +0000 (0:00:02.437) 0:00:47.900 *********
2025-08-29 15:04:04.442968 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:04:04.442980 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:04:04.442990 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:04:04.443001 | orchestrator |
2025-08-29 15:04:04.443012 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-08-29 15:04:04.443023 | orchestrator | Friday 29 August 2025 15:01:58 +0000 (0:00:00.317) 0:00:48.218 *********
2025-08-29 15:04:04.443034 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:04.443050 | orchestrator |
2025-08-29 15:04:04.443061 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-08-29 15:04:04.443072 | orchestrator | Friday 29 August 2025 15:02:00 +0000 (0:00:02.113) 0:00:50.331 *********
2025-08-29 15:04:04.443082 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:04.443093 | orchestrator |
2025-08-29 15:04:04.443104 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-08-29 15:04:04.443115 | orchestrator | Friday 29 August 2025 15:02:02 +0000 (0:00:02.101) 0:00:52.432 *********
2025-08-29 15:04:04.443125 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:04:04.443136 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:04:04.443147 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:04:04.443157 | orchestrator |
2025-08-29 15:04:04.443168 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-08-29 15:04:04.443179 | orchestrator | Friday 29 August 2025 15:02:03 +0000 (0:00:00.899) 0:00:53.332 *********
2025-08-29 15:04:04.443190 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:04:04.443200 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:04:04.443211 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:04:04.443221 | orchestrator |
2025-08-29 15:04:04.443232 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-08-29 15:04:04.443243 | orchestrator | Friday 29 August 2025 15:02:04 +0000 (0:00:00.688) 0:00:54.021 *********
2025-08-29 15:04:04.443253 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:04:04.443264 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:04:04.443275 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:04:04.443285 | orchestrator |
2025-08-29 15:04:04.443296 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-08-29 15:04:04.443307 | orchestrator | Friday 29 August 2025 15:02:05 +0000 (0:00:00.499) 0:00:54.520 *********
2025-08-29 15:04:04.443318 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:04.443328 | orchestrator |
2025-08-29 15:04:04.443339 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-08-29 15:04:04.443350 | orchestrator | Friday 29 August 2025 15:02:20 +0000 (0:00:15.276) 0:01:09.796 *********
2025-08-29 15:04:04.443360 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:04.443371 | orchestrator |
2025-08-29 15:04:04.443382 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-08-29 15:04:04.443392 | orchestrator | Friday 29 August 2025 15:02:30 +0000 (0:00:09.919) 0:01:19.716 *********
2025-08-29 15:04:04.443403 | orchestrator |
2025-08-29 15:04:04.443414 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-08-29 15:04:04.443424 | orchestrator | Friday 29 August 2025 15:02:30 +0000 (0:00:00.084) 0:01:19.800 *********
2025-08-29 15:04:04.443435 | orchestrator |
2025-08-29 15:04:04.443446 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-08-29 15:04:04.443456 | orchestrator | Friday 29 August 2025 15:02:30 +0000 (0:00:00.076) 0:01:19.876 *********
2025-08-29 15:04:04.443467 | orchestrator |
2025-08-29 15:04:04.443484 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-08-29 15:04:04.443495 | orchestrator | Friday 29 August 2025 15:02:30 +0000 (0:00:00.069) 0:01:19.946 *********
2025-08-29 15:04:04.443506 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:04.443516 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:04:04.443527 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:04:04.443538 | orchestrator |
2025-08-29 15:04:04.443549 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-08-29 15:04:04.443568 | orchestrator | Friday 29 August 2025 15:02:57 +0000 (0:00:27.380) 0:01:47.326 *********
2025-08-29 15:04:04.443649 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:04.443677 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:04:04.443695 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:04:04.443714 | orchestrator |
2025-08-29 15:04:04.443732 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-08-29 15:04:04.443765 | orchestrator | Friday 29 August 2025 15:03:03 +0000 (0:00:05.243) 0:01:52.570 *********
2025-08-29 15:04:04.443784 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:04.443798 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:04:04.443815 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:04:04.443826 | orchestrator |
2025-08-29 15:04:04.443837 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-08-29 15:04:04.443848 | orchestrator | Friday 29 August 2025 15:03:15 +0000 (0:00:11.947) 0:02:04.517 *********
2025-08-29 15:04:04.443859 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:04:04.443869 | orchestrator |
2025-08-29 15:04:04.443880 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-08-29 15:04:04.443891 | orchestrator | Friday 29 August 2025 15:03:15 +0000 (0:00:00.869) 0:02:05.387 *********
2025-08-29 15:04:04.443901 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:04:04.443912 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:04:04.443923 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:04:04.443933 | orchestrator |
2025-08-29 15:04:04.443944 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-08-29 15:04:04.443955 | orchestrator | Friday 29 August 2025 15:03:16 +0000 (0:00:00.773) 0:02:06.160 *********
2025-08-29 15:04:04.443965 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:04.443976 | orchestrator |
2025-08-29 15:04:04.443986 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-08-29 15:04:04.443997 | orchestrator | Friday 29 August 2025 15:03:18 +0000 (0:00:01.911) 0:02:08.072 *********
2025-08-29 15:04:04.444008 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-08-29 15:04:04.444019 | orchestrator |
2025-08-29 15:04:04.444030 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-08-29 15:04:04.444041 | orchestrator | Friday 29 August 2025 15:03:29 +0000 (0:00:10.474) 0:02:18.546 *********
2025-08-29 15:04:04.444051 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-08-29 15:04:04.444062 | orchestrator |
2025-08-29 15:04:04.444073 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-08-29 15:04:04.444083 | orchestrator | Friday 29 August 2025 15:03:50 +0000 (0:00:21.288) 0:02:39.834 *********
2025-08-29 15:04:04.444094 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-08-29 15:04:04.444105 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-08-29 15:04:04.444116 | orchestrator |
2025-08-29 15:04:04.444126 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-08-29 15:04:04.444137 | orchestrator | Friday 29 August 2025 15:03:57 +0000 (0:00:06.659) 0:02:46.494 *********
2025-08-29 15:04:04.444148 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:04:04.444158 | orchestrator |
2025-08-29 15:04:04.444169 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-08-29 15:04:04.444180 | orchestrator | Friday 29 August 2025 15:03:57 +0000 (0:00:00.134) 0:02:46.629 *********
2025-08-29 15:04:04.444191 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:04:04.444201 | orchestrator |
2025-08-29 15:04:04.444212 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-08-29 15:04:04.444223 | orchestrator | Friday 29 August 2025 15:03:57 +0000 (0:00:00.122) 0:02:46.751 *********
2025-08-29 15:04:04.444234 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:04:04.444244 | orchestrator |
2025-08-29 15:04:04.444255 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-08-29 15:04:04.444266 | orchestrator | Friday 29 August 2025 15:03:57 +0000 (0:00:00.127) 0:02:46.878 *********
2025-08-29 15:04:04.444276 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:04:04.444287 | orchestrator |
2025-08-29 15:04:04.444298 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-08-29 15:04:04.444308 | orchestrator | Friday 29 August 2025 15:03:57 +0000 (0:00:00.545) 0:02:47.424 *********
2025-08-29 15:04:04.444328 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:04:04.444339 | orchestrator |
2025-08-29 15:04:04.444349 | orchestrator | TASK
[keystone : include_tasks] ************************************************
2025-08-29 15:04:04.444360 | orchestrator | Friday 29 August 2025 15:04:01 +0000 (0:00:03.198) 0:02:50.622 *********
2025-08-29 15:04:04.444371 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:04:04.444381 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:04:04.444392 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:04:04.444403 | orchestrator |
2025-08-29 15:04:04.444413 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:04:04.444425 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-08-29 15:04:04.444438 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-08-29 15:04:04.444458 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-08-29 15:04:04.444470 | orchestrator |
2025-08-29 15:04:04.444481 | orchestrator |
2025-08-29 15:04:04.444491 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:04:04.444502 | orchestrator | Friday 29 August 2025 15:04:01 +0000 (0:00:00.443) 0:02:51.066 *********
2025-08-29 15:04:04.444512 | orchestrator | ===============================================================================
2025-08-29 15:04:04.444523 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 27.38s
2025-08-29 15:04:04.444534 | orchestrator | service-ks-register : keystone | Creating services --------------------- 21.29s
2025-08-29 15:04:04.444544 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.28s
2025-08-29 15:04:04.444555 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.95s
2025-08-29 15:04:04.444566 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.47s
2025-08-29 15:04:04.444584 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.92s
2025-08-29 15:04:04.444681 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.18s
2025-08-29 15:04:04.444699 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.73s
2025-08-29 15:04:04.444717 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.66s
2025-08-29 15:04:04.444736 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.24s
2025-08-29 15:04:04.444756 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.29s
2025-08-29 15:04:04.444773 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.25s
2025-08-29 15:04:04.444791 | orchestrator | keystone : Creating default user role ----------------------------------- 3.20s
2025-08-29 15:04:04.444808 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.10s
2025-08-29 15:04:04.444824 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.51s
2025-08-29 15:04:04.444839 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.44s
2025-08-29 15:04:04.444855 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.11s
2025-08-29 15:04:04.444871 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.10s
2025-08-29 15:04:04.444887 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.91s
2025-08-29 15:04:04.444903 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.77s
2025-08-29 15:04:04.444921 | orchestrator | 2025-08-29 15:04:04 | INFO  | Task
99a279b0-4376-4399-9f23-69e0d525ea65 is in state SUCCESS
2025-08-29 15:04:04.444939 | orchestrator | 2025-08-29 15:04:04 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED
2025-08-29 15:04:04.444969 | orchestrator | 2025-08-29 15:04:04 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED
2025-08-29 15:04:04.444986 | orchestrator | 2025-08-29 15:04:04 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:07.483997 | orchestrator | 2025-08-29 15:04:07 | INFO  | Task ca6dd6d3-8b70-452b-937a-544bbad7a14c is in state STARTED
2025-08-29 15:04:07.484566 | orchestrator | 2025-08-29 15:04:07 | INFO  | Task b9161ca7-2dac-43cb-ae72-a2173b6965c9 is in state STARTED
2025-08-29 15:04:07.485419 | orchestrator | 2025-08-29 15:04:07 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED
2025-08-29 15:04:07.486346 | orchestrator | 2025-08-29 15:04:07 | INFO  | Task 8994574b-5f61-4bb9-b4cd-c5f68dae74b0 is in state STARTED
2025-08-29 15:04:07.487310 | orchestrator | 2025-08-29 15:04:07 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED
2025-08-29 15:04:07.487359 | orchestrator | 2025-08-29 15:04:07 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:10.523562 | orchestrator | 2025-08-29 15:04:10 | INFO  | Task ca6dd6d3-8b70-452b-937a-544bbad7a14c is in state STARTED
2025-08-29 15:04:10.524190 | orchestrator | 2025-08-29 15:04:10 | INFO  | Task b9161ca7-2dac-43cb-ae72-a2173b6965c9 is in state STARTED
2025-08-29 15:04:10.525034 | orchestrator | 2025-08-29 15:04:10 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED
2025-08-29 15:04:10.525663 | orchestrator | 2025-08-29 15:04:10 | INFO  | Task 8994574b-5f61-4bb9-b4cd-c5f68dae74b0 is in state STARTED
2025-08-29 15:04:10.526448 | orchestrator | 2025-08-29 15:04:10 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED
2025-08-29 15:04:10.526561 | orchestrator | 2025-08-29 15:04:10 | INFO  | Wait 1
second(s) until the next check 2025-08-29 15:05:29.865862 | orchestrator | 2025-08-29 15:05:29 | INFO  | Task ca6dd6d3-8b70-452b-937a-544bbad7a14c is in state STARTED 2025-08-29 15:05:29.866962 | orchestrator | 2025-08-29 15:05:29 | INFO  | Task b9161ca7-2dac-43cb-ae72-a2173b6965c9 is in state STARTED 2025-08-29 15:05:29.868588 | orchestrator | 2025-08-29 15:05:29 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:05:29.870228 | orchestrator | 2025-08-29 15:05:29 | INFO  | Task 8994574b-5f61-4bb9-b4cd-c5f68dae74b0 is in state SUCCESS 2025-08-29 15:05:29.870377 | orchestrator | 2025-08-29 15:05:29.870397 | orchestrator | 2025-08-29 15:05:29.870407 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-08-29 15:05:29.870445 | orchestrator | 2025-08-29 15:05:29.870455 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-08-29 15:05:29.870463 | orchestrator | Friday 29 August 2025 15:03:10 +0000 (0:00:00.271) 0:00:00.271 ********* 2025-08-29 15:05:29.870471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-08-29 15:05:29.870482 | orchestrator | 2025-08-29 15:05:29.870491 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-08-29 15:05:29.870540 | orchestrator | Friday 29 August 2025 15:03:10 +0000 (0:00:00.261) 0:00:00.533 ********* 2025-08-29 15:05:29.870552 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-08-29 15:05:29.870561 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-08-29 15:05:29.870571 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-08-29 15:05:29.870580 | orchestrator | 2025-08-29 15:05:29.870589 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] 
******************** 2025-08-29 15:05:29.870598 | orchestrator | Friday 29 August 2025 15:03:11 +0000 (0:00:01.327) 0:00:01.860 ********* 2025-08-29 15:05:29.870608 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-08-29 15:05:29.870616 | orchestrator | 2025-08-29 15:05:29.870622 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-08-29 15:05:29.870627 | orchestrator | Friday 29 August 2025 15:03:13 +0000 (0:00:01.256) 0:00:03.116 ********* 2025-08-29 15:05:29.870633 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:29.870639 | orchestrator | 2025-08-29 15:05:29.870644 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-08-29 15:05:29.870650 | orchestrator | Friday 29 August 2025 15:03:14 +0000 (0:00:01.140) 0:00:04.257 ********* 2025-08-29 15:05:29.870655 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:29.870661 | orchestrator | 2025-08-29 15:05:29.870666 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-08-29 15:05:29.870672 | orchestrator | Friday 29 August 2025 15:03:15 +0000 (0:00:00.981) 0:00:05.239 ********* 2025-08-29 15:05:29.870677 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2025-08-29 15:05:29.870683 | orchestrator | ok: [testbed-manager] 2025-08-29 15:05:29.870689 | orchestrator | 2025-08-29 15:05:29.870694 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-08-29 15:05:29.870700 | orchestrator | Friday 29 August 2025 15:03:53 +0000 (0:00:38.072) 0:00:43.312 ********* 2025-08-29 15:05:29.870705 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-08-29 15:05:29.870711 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-08-29 15:05:29.870717 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-08-29 15:05:29.870722 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-08-29 15:05:29.870728 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-08-29 15:05:29.870733 | orchestrator | 2025-08-29 15:05:29.870738 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-08-29 15:05:29.870744 | orchestrator | Friday 29 August 2025 15:03:57 +0000 (0:00:04.343) 0:00:47.656 ********* 2025-08-29 15:05:29.870749 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-08-29 15:05:29.870755 | orchestrator | 2025-08-29 15:05:29.870762 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-08-29 15:05:29.870771 | orchestrator | Friday 29 August 2025 15:03:58 +0000 (0:00:00.506) 0:00:48.162 ********* 2025-08-29 15:05:29.870779 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:05:29.870787 | orchestrator | 2025-08-29 15:05:29.870795 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-08-29 15:05:29.870804 | orchestrator | Friday 29 August 2025 15:03:58 +0000 (0:00:00.125) 0:00:48.287 ********* 2025-08-29 15:05:29.870812 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:05:29.870832 | orchestrator | 2025-08-29 15:05:29.870841 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2025-08-29 15:05:29.870850 | orchestrator | Friday 29 August 2025 15:03:58 +0000 (0:00:00.301) 0:00:48.589 ********* 2025-08-29 15:05:29.870860 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:29.870866 | orchestrator | 2025-08-29 15:05:29.870871 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-08-29 15:05:29.870877 | orchestrator | Friday 29 August 2025 15:04:00 +0000 (0:00:01.868) 0:00:50.457 ********* 2025-08-29 15:05:29.870882 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:29.870887 | orchestrator | 2025-08-29 15:05:29.870893 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-08-29 15:05:29.870898 | orchestrator | Friday 29 August 2025 15:04:01 +0000 (0:00:00.784) 0:00:51.241 ********* 2025-08-29 15:05:29.870903 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:29.870909 | orchestrator | 2025-08-29 15:05:29.870914 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-08-29 15:05:29.870919 | orchestrator | Friday 29 August 2025 15:04:01 +0000 (0:00:00.636) 0:00:51.878 ********* 2025-08-29 15:05:29.870926 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-08-29 15:05:29.870936 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-08-29 15:05:29.870958 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-08-29 15:05:29.870966 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-08-29 15:05:29.870974 | orchestrator | 2025-08-29 15:05:29.870982 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:05:29.870992 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:05:29.871002 | orchestrator | 2025-08-29 15:05:29.871010 | orchestrator | 2025-08-29 
15:05:29.871031 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:05:29.871040 | orchestrator | Friday 29 August 2025 15:04:03 +0000 (0:00:02.049) 0:00:53.927 ********* 2025-08-29 15:05:29.871048 | orchestrator | =============================================================================== 2025-08-29 15:05:29.871057 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 38.07s 2025-08-29 15:05:29.871065 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.34s 2025-08-29 15:05:29.871074 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 2.05s 2025-08-29 15:05:29.871082 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.87s 2025-08-29 15:05:29.871090 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.33s 2025-08-29 15:05:29.871199 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.26s 2025-08-29 15:05:29.871208 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.14s 2025-08-29 15:05:29.871217 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.98s 2025-08-29 15:05:29.871225 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.78s 2025-08-29 15:05:29.871234 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.64s 2025-08-29 15:05:29.871242 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.51s 2025-08-29 15:05:29.871251 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.30s 2025-08-29 15:05:29.871259 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.26s 2025-08-29 15:05:29.871267 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2025-08-29 15:05:29.871275 | orchestrator | 2025-08-29 15:05:29.871284 | orchestrator | 2025-08-29 15:05:29.871292 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2025-08-29 15:05:29.871301 | orchestrator | 2025-08-29 15:05:29.871309 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-08-29 15:05:29.871327 | orchestrator | Friday 29 August 2025 15:04:09 +0000 (0:00:00.323) 0:00:00.323 ********* 2025-08-29 15:05:29.871335 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:29.871344 | orchestrator | 2025-08-29 15:05:29.871352 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-08-29 15:05:29.871361 | orchestrator | Friday 29 August 2025 15:04:10 +0000 (0:00:01.746) 0:00:02.070 ********* 2025-08-29 15:05:29.871369 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:29.871377 | orchestrator | 2025-08-29 15:05:29.871386 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-08-29 15:05:29.871394 | orchestrator | Friday 29 August 2025 15:04:11 +0000 (0:00:01.005) 0:00:03.076 ********* 2025-08-29 15:05:29.871402 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:29.871410 | orchestrator | 2025-08-29 15:05:29.871419 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-08-29 15:05:29.871427 | orchestrator | Friday 29 August 2025 15:04:12 +0000 (0:00:00.937) 0:00:04.013 ********* 2025-08-29 15:05:29.871435 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:29.871443 | orchestrator | 2025-08-29 15:05:29.871451 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-08-29 15:05:29.871460 | orchestrator | Friday 29 August 2025 15:04:13 +0000 (0:00:01.094) 
0:00:05.108 ********* 2025-08-29 15:05:29.871467 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:29.871476 | orchestrator | 2025-08-29 15:05:29.871484 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-08-29 15:05:29.871492 | orchestrator | Friday 29 August 2025 15:04:15 +0000 (0:00:01.194) 0:00:06.302 ********* 2025-08-29 15:05:29.871521 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:29.871530 | orchestrator | 2025-08-29 15:05:29.871539 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-08-29 15:05:29.871547 | orchestrator | Friday 29 August 2025 15:04:16 +0000 (0:00:01.179) 0:00:07.482 ********* 2025-08-29 15:05:29.871555 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:29.871564 | orchestrator | 2025-08-29 15:05:29.871573 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-08-29 15:05:29.871582 | orchestrator | Friday 29 August 2025 15:04:18 +0000 (0:00:02.063) 0:00:09.545 ********* 2025-08-29 15:05:29.871591 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:29.871601 | orchestrator | 2025-08-29 15:05:29.871610 | orchestrator | TASK [Create admin user] ******************************************************* 2025-08-29 15:05:29.871618 | orchestrator | Friday 29 August 2025 15:04:19 +0000 (0:00:01.398) 0:00:10.944 ********* 2025-08-29 15:05:29.871628 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:29.871634 | orchestrator | 2025-08-29 15:05:29.871640 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-08-29 15:05:29.871645 | orchestrator | Friday 29 August 2025 15:05:02 +0000 (0:00:42.622) 0:00:53.567 ********* 2025-08-29 15:05:29.871651 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:05:29.871656 | orchestrator | 2025-08-29 15:05:29.871661 | orchestrator | PLAY [Restart ceph manager 
services] ******************************************* 2025-08-29 15:05:29.871667 | orchestrator | 2025-08-29 15:05:29.871672 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-08-29 15:05:29.871677 | orchestrator | Friday 29 August 2025 15:05:02 +0000 (0:00:00.193) 0:00:53.760 ********* 2025-08-29 15:05:29.871683 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:05:29.871688 | orchestrator | 2025-08-29 15:05:29.871700 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-08-29 15:05:29.871705 | orchestrator | 2025-08-29 15:05:29.871711 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-08-29 15:05:29.871716 | orchestrator | Friday 29 August 2025 15:05:14 +0000 (0:00:11.730) 0:01:05.490 ********* 2025-08-29 15:05:29.871721 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:05:29.871727 | orchestrator | 2025-08-29 15:05:29.871732 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-08-29 15:05:29.871743 | orchestrator | 2025-08-29 15:05:29.871755 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-08-29 15:05:29.871761 | orchestrator | Friday 29 August 2025 15:05:15 +0000 (0:00:01.337) 0:01:06.828 ********* 2025-08-29 15:05:29.871766 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:05:29.871772 | orchestrator | 2025-08-29 15:05:29.871781 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:05:29.871790 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 15:05:29.871799 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:05:29.871808 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-08-29 15:05:29.871817 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:05:29.871826 | orchestrator | 2025-08-29 15:05:29.871836 | orchestrator | 2025-08-29 15:05:29.871844 | orchestrator | 2025-08-29 15:05:29.871854 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:05:29.871860 | orchestrator | Friday 29 August 2025 15:05:26 +0000 (0:00:11.253) 0:01:18.082 ********* 2025-08-29 15:05:29.871865 | orchestrator | =============================================================================== 2025-08-29 15:05:29.871871 | orchestrator | Create admin user ------------------------------------------------------ 42.62s 2025-08-29 15:05:29.871876 | orchestrator | Restart ceph manager service ------------------------------------------- 24.32s 2025-08-29 15:05:29.871881 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.06s 2025-08-29 15:05:29.871887 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.75s 2025-08-29 15:05:29.871892 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.40s 2025-08-29 15:05:29.871898 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.19s 2025-08-29 15:05:29.871903 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.18s 2025-08-29 15:05:29.871908 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.09s 2025-08-29 15:05:29.871914 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.01s 2025-08-29 15:05:29.871919 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.94s 2025-08-29 15:05:29.871925 | orchestrator | Remove temporary file for ceph_dashboard_password 
----------------------- 0.19s 2025-08-29 15:05:29.871930 | orchestrator | 2025-08-29 15:05:29 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:05:29.873633 | orchestrator | 2025-08-29 15:05:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:32.910424 | orchestrator | 2025-08-29 15:05:32 | INFO  | Task ca6dd6d3-8b70-452b-937a-544bbad7a14c is in state STARTED 2025-08-29 15:05:32.911156 | orchestrator | 2025-08-29 15:05:32 | INFO  | Task b9161ca7-2dac-43cb-ae72-a2173b6965c9 is in state STARTED 2025-08-29 15:05:32.912111 | orchestrator | 2025-08-29 15:05:32 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:05:32.913055 | orchestrator | 2025-08-29 15:05:32 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:05:32.913095 | orchestrator | 2025-08-29 15:05:32 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for the same four tasks repeated every 3 seconds until 15:06:21 ...]
2025-08-29 15:06:21.662414 | orchestrator | 2025-08-29 15:06:21 | INFO  | Task ca6dd6d3-8b70-452b-937a-544bbad7a14c is in state SUCCESS 2025-08-29 15:06:21.664170 | orchestrator | 2025-08-29 15:06:21.664243 | orchestrator | 2025-08-29 15:06:21.664263 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:06:21.664278 | orchestrator | 2025-08-29 15:06:21.664293 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:06:21.664311 | orchestrator | Friday 29 August 2025 15:04:08 +0000 (0:00:00.326) 0:00:00.326 ********* 2025-08-29 15:06:21.664327 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:06:21.664388 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:06:21.664400 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:06:21.664410 | orchestrator | 2025-08-29 15:06:21.664602 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:06:21.664662 | orchestrator | Friday 29 August 2025 15:04:08 +0000 (0:00:00.452) 0:00:00.779 ********* 2025-08-29 15:06:21.664674 | orchestrator | ok: 
[testbed-node-0] => (item=enable_barbican_True) 2025-08-29 15:06:21.664685 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-08-29 15:06:21.664694 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-08-29 15:06:21.664704 | orchestrator | 2025-08-29 15:06:21.664714 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-08-29 15:06:21.664723 | orchestrator | 2025-08-29 15:06:21.664733 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-08-29 15:06:21.664744 | orchestrator | Friday 29 August 2025 15:04:09 +0000 (0:00:00.911) 0:00:01.690 ********* 2025-08-29 15:06:21.664756 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:06:21.664789 | orchestrator | 2025-08-29 15:06:21.664801 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-08-29 15:06:21.664812 | orchestrator | Friday 29 August 2025 15:04:10 +0000 (0:00:00.649) 0:00:02.340 ********* 2025-08-29 15:06:21.664823 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-08-29 15:06:21.664835 | orchestrator | 2025-08-29 15:06:21.664845 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-08-29 15:06:21.664860 | orchestrator | Friday 29 August 2025 15:04:13 +0000 (0:00:03.447) 0:00:05.788 ********* 2025-08-29 15:06:21.664876 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-08-29 15:06:21.664902 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-08-29 15:06:21.664921 | orchestrator | 2025-08-29 15:06:21.664935 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-08-29 15:06:21.664950 | 
orchestrator | Friday 29 August 2025 15:04:21 +0000 (0:00:07.487) 0:00:13.275 ********* 2025-08-29 15:06:21.664964 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:06:21.664979 | orchestrator | 2025-08-29 15:06:21.664994 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-08-29 15:06:21.665009 | orchestrator | Friday 29 August 2025 15:04:24 +0000 (0:00:03.025) 0:00:16.301 ********* 2025-08-29 15:06:21.665023 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:06:21.665037 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-08-29 15:06:21.665053 | orchestrator | 2025-08-29 15:06:21.665067 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-08-29 15:06:21.665081 | orchestrator | Friday 29 August 2025 15:04:27 +0000 (0:00:03.451) 0:00:19.752 ********* 2025-08-29 15:06:21.665098 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:06:21.665115 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-08-29 15:06:21.665132 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-08-29 15:06:21.665147 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-08-29 15:06:21.665165 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-08-29 15:06:21.665176 | orchestrator | 2025-08-29 15:06:21.665185 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-08-29 15:06:21.665195 | orchestrator | Friday 29 August 2025 15:04:41 +0000 (0:00:14.170) 0:00:33.923 ********* 2025-08-29 15:06:21.665205 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-08-29 15:06:21.665214 | orchestrator | 2025-08-29 15:06:21.665224 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-08-29 15:06:21.665234 | orchestrator 
| Friday 29 August 2025 15:04:45 +0000 (0:00:03.600) 0:00:37.524 ********* 2025-08-29 15:06:21.665247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:06:21.665291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 
2025-08-29 15:06:21.665315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:06:21.665327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.665339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.665350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.665369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.665394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.665405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.665415 | orchestrator | 2025-08-29 15:06:21.665425 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-08-29 15:06:21.665435 | orchestrator | Friday 29 August 2025 15:04:47 +0000 (0:00:02.367) 0:00:39.891 ********* 2025-08-29 15:06:21.665445 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-08-29 15:06:21.665551 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-08-29 15:06:21.665569 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-08-29 15:06:21.665584 | orchestrator | 2025-08-29 15:06:21.665599 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-08-29 15:06:21.665616 | orchestrator | Friday 29 August 2025 15:04:49 +0000 (0:00:01.604) 0:00:41.496 ********* 2025-08-29 15:06:21.665633 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:21.665650 | orchestrator | 2025-08-29 15:06:21.665666 | orchestrator | TASK [barbican : Set barbican policy file] 
************************************* 2025-08-29 15:06:21.665676 | orchestrator | Friday 29 August 2025 15:04:49 +0000 (0:00:00.186) 0:00:41.683 ********* 2025-08-29 15:06:21.665686 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:21.665695 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:06:21.665705 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:06:21.665715 | orchestrator | 2025-08-29 15:06:21.665724 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-08-29 15:06:21.665734 | orchestrator | Friday 29 August 2025 15:04:51 +0000 (0:00:01.771) 0:00:43.454 ********* 2025-08-29 15:06:21.665744 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:06:21.665754 | orchestrator | 2025-08-29 15:06:21.665763 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-08-29 15:06:21.665773 | orchestrator | Friday 29 August 2025 15:04:53 +0000 (0:00:01.826) 0:00:45.281 ********* 2025-08-29 15:06:21.665784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:06:21.665822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:06:21.665833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 
15:06:21.665844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.665854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.665865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.665881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.665905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.665938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.665957 | orchestrator | 2025-08-29 15:06:21.665972 | orchestrator | TASK 
[service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-08-29 15:06:21.665988 | orchestrator | Friday 29 August 2025 15:04:56 +0000 (0:00:03.425) 0:00:48.707 ********* 2025-08-29 15:06:21.666003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:06:21.666090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:06:21.666115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:06:21.666145 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:21.666176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:06:21.666193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:06:21.666204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:06:21.666214 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:06:21.666224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:06:21.666234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:06:21.666252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:06:21.666263 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:06:21.666272 | orchestrator | 2025-08-29 15:06:21.666282 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-08-29 15:06:21.666292 | orchestrator | Friday 29 August 2025 15:04:58 +0000 (0:00:02.163) 0:00:50.871 ********* 2025-08-29 15:06:21.666323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:06:21.666334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:06:21.666344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:06:21.666354 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:21.666365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:06:21.666380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:06:21.666398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:06:21.666413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:06:21.666424 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:06:21.666434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:06:21.666444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:06:21.666544 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:06:21.666563 | orchestrator | 2025-08-29 15:06:21.666578 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-08-29 15:06:21.666594 | orchestrator | Friday 29 August 2025 15:05:01 +0000 (0:00:02.471) 0:00:53.342 ********* 2025-08-29 15:06:21.666612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:06:21.666639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:06:21.666663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:06:21.666674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.666684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.666702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.666712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.666729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.666745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.666755 | orchestrator | 2025-08-29 15:06:21.666765 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-08-29 15:06:21.666775 | orchestrator | Friday 29 August 2025 15:05:05 +0000 (0:00:04.521) 0:00:57.864 ********* 2025-08-29 15:06:21.666785 | orchestrator | changed: 
[testbed-node-0] 2025-08-29 15:06:21.666795 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:06:21.666804 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:06:21.666814 | orchestrator | 2025-08-29 15:06:21.666824 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-08-29 15:06:21.666833 | orchestrator | Friday 29 August 2025 15:05:08 +0000 (0:00:02.679) 0:01:00.545 ********* 2025-08-29 15:06:21.666843 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:06:21.666853 | orchestrator | 2025-08-29 15:06:21.666862 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-08-29 15:06:21.666872 | orchestrator | Friday 29 August 2025 15:05:11 +0000 (0:00:02.537) 0:01:03.082 ********* 2025-08-29 15:06:21.666882 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:21.666892 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:06:21.666908 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:06:21.666917 | orchestrator | 2025-08-29 15:06:21.666927 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-08-29 15:06:21.666937 | orchestrator | Friday 29 August 2025 15:05:11 +0000 (0:00:00.968) 0:01:04.050 ********* 2025-08-29 15:06:21.666947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:06:21.666957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:06:21.666974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:06:21.666989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.667000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.667016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.667026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.667037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.667047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.667057 | orchestrator | 2025-08-29 15:06:21.667067 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-08-29 15:06:21.667077 | orchestrator | Friday 29 August 2025 15:05:24 +0000 (0:00:12.214) 0:01:16.265 ********* 2025-08-29 15:06:21.667100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:06:21.667118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:06:21.667128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:06:21.667138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:06:21.667149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:06:21.667159 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:06:21.667176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:06:21.667186 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:21.667201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:06:21.667231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:06:21.667241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:06:21.667252 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:06:21.667261 | orchestrator | 2025-08-29 15:06:21.667271 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-08-29 15:06:21.667281 | orchestrator | Friday 29 August 2025 15:05:26 +0000 (0:00:02.027) 0:01:18.293 ********* 2025-08-29 15:06:21.667292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:06:21.667315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:06:21.667337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:06:21.667347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.667358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.667368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.667378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.667404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 
15:06:21.667421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:06:21.667432 | orchestrator | 2025-08-29 15:06:21.667441 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-08-29 15:06:21.667480 | orchestrator | Friday 29 August 2025 15:05:30 +0000 (0:00:04.348) 0:01:22.641 ********* 2025-08-29 15:06:21.667499 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:21.667516 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:06:21.667532 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:06:21.667547 | orchestrator | 2025-08-29 15:06:21.667564 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-08-29 15:06:21.667581 | orchestrator | Friday 29 August 2025 15:05:31 +0000 (0:00:00.966) 0:01:23.608 ********* 2025-08-29 15:06:21.667598 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:06:21.667614 | orchestrator | 2025-08-29 15:06:21.667624 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-08-29 15:06:21.667634 | orchestrator | Friday 29 August 2025 15:05:33 +0000 (0:00:02.346) 0:01:25.954 ********* 2025-08-29 15:06:21.667644 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:06:21.667653 | orchestrator | 2025-08-29 15:06:21.667664 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-08-29 
15:06:21.667680 | orchestrator | Friday 29 August 2025 15:05:36 +0000 (0:00:02.212) 0:01:28.166 ********* 2025-08-29 15:06:21.667694 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:06:21.667713 | orchestrator | 2025-08-29 15:06:21.667736 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 15:06:21.667752 | orchestrator | Friday 29 August 2025 15:05:48 +0000 (0:00:12.137) 0:01:40.304 ********* 2025-08-29 15:06:21.667769 | orchestrator | 2025-08-29 15:06:21.667784 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 15:06:21.667799 | orchestrator | Friday 29 August 2025 15:05:48 +0000 (0:00:00.164) 0:01:40.468 ********* 2025-08-29 15:06:21.667813 | orchestrator | 2025-08-29 15:06:21.667828 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 15:06:21.667844 | orchestrator | Friday 29 August 2025 15:05:48 +0000 (0:00:00.144) 0:01:40.613 ********* 2025-08-29 15:06:21.667860 | orchestrator | 2025-08-29 15:06:21.667875 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-08-29 15:06:21.667891 | orchestrator | Friday 29 August 2025 15:05:48 +0000 (0:00:00.068) 0:01:40.681 ********* 2025-08-29 15:06:21.667907 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:06:21.667923 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:06:21.667939 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:06:21.667956 | orchestrator | 2025-08-29 15:06:21.667973 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-08-29 15:06:21.667988 | orchestrator | Friday 29 August 2025 15:06:00 +0000 (0:00:11.847) 0:01:52.529 ********* 2025-08-29 15:06:21.668004 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:06:21.668021 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:06:21.668037 | orchestrator | 
changed: [testbed-node-0] 2025-08-29 15:06:21.668054 | orchestrator | 2025-08-29 15:06:21.668070 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-08-29 15:06:21.668087 | orchestrator | Friday 29 August 2025 15:06:09 +0000 (0:00:08.830) 0:02:01.359 ********* 2025-08-29 15:06:21.668109 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:06:21.668119 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:06:21.668128 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:06:21.668138 | orchestrator | 2025-08-29 15:06:21.668147 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:06:21.668159 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 15:06:21.668171 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:06:21.668181 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:06:21.668190 | orchestrator | 2025-08-29 15:06:21.668200 | orchestrator | 2025-08-29 15:06:21.668209 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:06:21.668219 | orchestrator | Friday 29 August 2025 15:06:21 +0000 (0:00:11.775) 0:02:13.134 ********* 2025-08-29 15:06:21.668228 | orchestrator | =============================================================================== 2025-08-29 15:06:21.668238 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.17s 2025-08-29 15:06:21.668259 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 12.21s 2025-08-29 15:06:21.668269 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.14s 2025-08-29 15:06:21.668279 | orchestrator | barbican : Restart barbican-api 
container ------------------------------ 11.85s 2025-08-29 15:06:21.668288 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.78s 2025-08-29 15:06:21.668298 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 8.83s 2025-08-29 15:06:21.668314 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.49s 2025-08-29 15:06:21.668324 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.52s 2025-08-29 15:06:21.668333 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.35s 2025-08-29 15:06:21.668343 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.60s 2025-08-29 15:06:21.668352 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.45s 2025-08-29 15:06:21.668362 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.45s 2025-08-29 15:06:21.668372 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.43s 2025-08-29 15:06:21.668381 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.03s 2025-08-29 15:06:21.668390 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.68s 2025-08-29 15:06:21.668400 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.54s 2025-08-29 15:06:21.668409 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 2.47s 2025-08-29 15:06:21.668419 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.37s 2025-08-29 15:06:21.668428 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.35s 2025-08-29 15:06:21.668437 | orchestrator | barbican : Creating barbican database user and 
setting permissions ------ 2.21s 2025-08-29 15:06:21.668447 | orchestrator | 2025-08-29 15:06:21 | INFO  | Task b9161ca7-2dac-43cb-ae72-a2173b6965c9 is in state STARTED 2025-08-29 15:06:21.668492 | orchestrator | 2025-08-29 15:06:21 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:06:21.668509 | orchestrator | 2025-08-29 15:06:21 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:06:21.668536 | orchestrator | 2025-08-29 15:06:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:24.713647 | orchestrator | 2025-08-29 15:06:24 | INFO  | Task fd212a7f-8578-4f55-b6a5-d94f006cfc21 is in state STARTED 2025-08-29 15:06:24.714645 | orchestrator | 2025-08-29 15:06:24 | INFO  | Task b9161ca7-2dac-43cb-ae72-a2173b6965c9 is in state STARTED 2025-08-29 15:06:24.715613 | orchestrator | 2025-08-29 15:06:24 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:06:24.716730 | orchestrator | 2025-08-29 15:06:24 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:06:24.716783 | orchestrator | 2025-08-29 15:06:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:27.751858 | orchestrator | 2025-08-29 15:06:27 | INFO  | Task fd212a7f-8578-4f55-b6a5-d94f006cfc21 is in state STARTED 2025-08-29 15:06:27.751993 | orchestrator | 2025-08-29 15:06:27 | INFO  | Task b9161ca7-2dac-43cb-ae72-a2173b6965c9 is in state STARTED 2025-08-29 15:06:27.753246 | orchestrator | 2025-08-29 15:06:27 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:06:27.753998 | orchestrator | 2025-08-29 15:06:27 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:06:27.754107 | orchestrator | 2025-08-29 15:06:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:30.805623 | orchestrator | 2025-08-29 15:06:30 | INFO  | Task fd212a7f-8578-4f55-b6a5-d94f006cfc21 is in state 
STARTED 2025-08-29 15:07:25.628298 | orchestrator | 2025-08-29 15:07:25 | INFO  | Task fd212a7f-8578-4f55-b6a5-d94f006cfc21 is in state STARTED 2025-08-29 15:07:25.629883 | orchestrator | 2025-08-29 15:07:25 | INFO  | Task 
b9161ca7-2dac-43cb-ae72-a2173b6965c9 is in state STARTED 2025-08-29 15:07:25.631172 | orchestrator | 2025-08-29 15:07:25 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:07:25.631796 | orchestrator | 2025-08-29 15:07:25 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:07:25.632108 | orchestrator | 2025-08-29 15:07:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:28.686918 | orchestrator | 2025-08-29 15:07:28 | INFO  | Task fd212a7f-8578-4f55-b6a5-d94f006cfc21 is in state STARTED 2025-08-29 15:07:28.689192 | orchestrator | 2025-08-29 15:07:28 | INFO  | Task f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:07:28.698710 | orchestrator | 2025-08-29 15:07:28 | INFO  | Task b9161ca7-2dac-43cb-ae72-a2173b6965c9 is in state SUCCESS 2025-08-29 15:07:28.700255 | orchestrator | 2025-08-29 15:07:28.700344 | orchestrator | 2025-08-29 15:07:28.700370 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:07:28.700391 | orchestrator | 2025-08-29 15:07:28.700488 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:07:28.700573 | orchestrator | Friday 29 August 2025 15:04:08 +0000 (0:00:00.299) 0:00:00.299 ********* 2025-08-29 15:07:28.700595 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:07:28.700615 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:07:28.700634 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:07:28.700700 | orchestrator | 2025-08-29 15:07:28.700978 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:07:28.701009 | orchestrator | Friday 29 August 2025 15:04:08 +0000 (0:00:00.378) 0:00:00.677 ********* 2025-08-29 15:07:28.701061 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-08-29 15:07:28.701085 | orchestrator | ok: [testbed-node-1] => 
(item=enable_designate_True) 2025-08-29 15:07:28.701106 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-08-29 15:07:28.701126 | orchestrator | 2025-08-29 15:07:28.701148 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-08-29 15:07:28.701170 | orchestrator | 2025-08-29 15:07:28.701190 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 15:07:28.701212 | orchestrator | Friday 29 August 2025 15:04:09 +0000 (0:00:00.570) 0:00:01.248 ********* 2025-08-29 15:07:28.701234 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:07:28.701255 | orchestrator | 2025-08-29 15:07:28.701271 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-08-29 15:07:28.701333 | orchestrator | Friday 29 August 2025 15:04:10 +0000 (0:00:01.016) 0:00:02.265 ********* 2025-08-29 15:07:28.701357 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-08-29 15:07:28.701378 | orchestrator | 2025-08-29 15:07:28.701420 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-08-29 15:07:28.701440 | orchestrator | Friday 29 August 2025 15:04:14 +0000 (0:00:04.106) 0:00:06.372 ********* 2025-08-29 15:07:28.701460 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-08-29 15:07:28.701479 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-08-29 15:07:28.701497 | orchestrator | 2025-08-29 15:07:28.701508 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-08-29 15:07:28.701656 | orchestrator | Friday 29 August 2025 15:04:20 +0000 (0:00:06.501) 0:00:12.874 ********* 2025-08-29 15:07:28.701682 | 
orchestrator | changed: [testbed-node-0] => (item=service) 2025-08-29 15:07:28.701704 | orchestrator | 2025-08-29 15:07:28.701761 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-08-29 15:07:28.701785 | orchestrator | Friday 29 August 2025 15:04:23 +0000 (0:00:03.137) 0:00:16.012 ********* 2025-08-29 15:07:28.701798 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:07:28.701809 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-08-29 15:07:28.701820 | orchestrator | 2025-08-29 15:07:28.701831 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-08-29 15:07:28.701842 | orchestrator | Friday 29 August 2025 15:04:27 +0000 (0:00:03.474) 0:00:19.486 ********* 2025-08-29 15:07:28.701868 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:07:28.701879 | orchestrator | 2025-08-29 15:07:28.701891 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-08-29 15:07:28.701901 | orchestrator | Friday 29 August 2025 15:04:30 +0000 (0:00:02.968) 0:00:22.455 ********* 2025-08-29 15:07:28.701912 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-08-29 15:07:28.701923 | orchestrator | 2025-08-29 15:07:28.701934 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-08-29 15:07:28.701945 | orchestrator | Friday 29 August 2025 15:04:34 +0000 (0:00:03.959) 0:00:26.415 ********* 2025-08-29 15:07:28.701959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:07:28.702009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:07:28.702101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:07:28.702125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702432 | orchestrator | 2025-08-29 15:07:28.702444 | orchestrator | TASK [designate : 
Check if policies shall be overwritten] ********************** 2025-08-29 15:07:28.702455 | orchestrator | Friday 29 August 2025 15:04:37 +0000 (0:00:02.831) 0:00:29.246 ********* 2025-08-29 15:07:28.702466 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:28.702477 | orchestrator | 2025-08-29 15:07:28.702493 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-08-29 15:07:28.702504 | orchestrator | Friday 29 August 2025 15:04:37 +0000 (0:00:00.133) 0:00:29.379 ********* 2025-08-29 15:07:28.702515 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:28.702526 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:07:28.702536 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:07:28.702547 | orchestrator | 2025-08-29 15:07:28.702569 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 15:07:28.702580 | orchestrator | Friday 29 August 2025 15:04:37 +0000 (0:00:00.289) 0:00:29.669 ********* 2025-08-29 15:07:28.702591 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:07:28.702602 | orchestrator | 2025-08-29 15:07:28.702613 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-08-29 15:07:28.702624 | orchestrator | Friday 29 August 2025 15:04:38 +0000 (0:00:00.797) 0:00:30.466 ********* 2025-08-29 15:07:28.702635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:07:28.702655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:07:28.702667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:07:28.702679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702811 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.702912 | orchestrator | 2025-08-29 15:07:28.702923 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-08-29 15:07:28.702934 | orchestrator | Friday 29 August 2025 
15:04:44 +0000 (0:00:05.868) 0:00:36.335 ********* 2025-08-29 15:07:28.702946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:07:28.702958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:07:28.702976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:07:28.703081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:07:28.703093 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:07:28.703111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703237 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:28.703253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:07:28.703265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}})  2025-08-29 15:07:28.703285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703339 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:07:28.703350 | orchestrator | 2025-08-29 15:07:28.703361 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-08-29 15:07:28.703372 | orchestrator | Friday 29 August 2025 15:04:44 +0000 (0:00:00.849) 0:00:37.184 ********* 2025-08-29 15:07:28.703387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:07:28.703475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:07:28.703496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703600 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:28.703627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:07:28.703648 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:07:28.703678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703769 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:07:28.703795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:07:28.703807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:07:28.703819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.703880 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:07:28.703891 | orchestrator | 2025-08-29 15:07:28.703902 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-08-29 15:07:28.703913 | orchestrator | Friday 29 August 2025 15:04:47 +0000 (0:00:02.850) 0:00:40.034 ********* 2025-08-29 15:07:28.703929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:07:28.703942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:07:28.703960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:07:28.703978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.703990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704169 | orchestrator | 
2025-08-29 15:07:28.704178 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-08-29 15:07:28.704188 | orchestrator | Friday 29 August 2025 15:04:55 +0000 (0:00:07.435) 0:00:47.470 ********* 2025-08-29 15:07:28.704198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:07:28.704212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}}}}) 2025-08-29 15:07:28.704223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:07:28.704245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}}) 2025-08-29 15:07:28.704377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704448 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704458 | orchestrator | 2025-08-29 15:07:28.704468 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-08-29 15:07:28.704478 | orchestrator | Friday 29 August 2025 15:05:19 +0000 (0:00:23.813) 0:01:11.283 ********* 2025-08-29 15:07:28.704488 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 15:07:28.704497 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 15:07:28.704507 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 15:07:28.704517 | orchestrator | 2025-08-29 15:07:28.704526 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-08-29 15:07:28.704536 | orchestrator | Friday 29 August 2025 15:05:29 +0000 (0:00:10.123) 0:01:21.406 ********* 2025-08-29 15:07:28.704546 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 15:07:28.704555 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 15:07:28.704565 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 15:07:28.704574 | orchestrator | 2025-08-29 15:07:28.704584 | orchestrator | TASK [designate : Copying over 
rndc.conf] ************************************** 2025-08-29 15:07:28.704594 | orchestrator | Friday 29 August 2025 15:05:33 +0000 (0:00:04.206) 0:01:25.613 ********* 2025-08-29 15:07:28.704609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:07:28.704620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:07:28.704642 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:07:28.704653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.704674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.704688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.704698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.704732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.704743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2025-08-29 15:07:28.704753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.704763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.704778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.704794 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.704804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:07:28.704819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705005 |
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705016 | orchestrator | 2025-08-29 15:07:28.705026 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-08-29 15:07:28.705035 | orchestrator | Friday 29 August 2025 15:05:37 +0000 (0:00:04.561) 0:01:30.174 ********* 2025-08-29 15:07:28.705046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:07:28.705071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:07:28.705082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:07:28.705098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 
15:07:28.705269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705289 | orchestrator | 2025-08-29 15:07:28.705300 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 15:07:28.705310 | orchestrator | Friday 29 August 2025 15:05:40 +0000 (0:00:02.928) 0:01:33.102 ********* 2025-08-29 15:07:28.705320 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:28.705330 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:07:28.705340 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:07:28.705349 | orchestrator | 2025-08-29 15:07:28.705359 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-08-29 15:07:28.705369 | orchestrator | Friday 29 August 2025 15:05:41 +0000 (0:00:00.389) 
0:01:33.492 ********* 2025-08-29 15:07:28.705379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:07:28.705424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:07:28.705443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:07:28.705491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:07:28.705522 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:07:28.705533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705584 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:28.705602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:07:28.705635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}})  2025-08-29 15:07:28.705654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:07:28.705708 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:07:28.705720 | orchestrator | 2025-08-29 15:07:28.705731 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-08-29 15:07:28.705741 | orchestrator | Friday 29 August 2025 15:05:44 +0000 (0:00:02.815) 0:01:36.307 ********* 2025-08-29 15:07:28.705759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:07:28.705774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:07:28.705787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:07:28.705799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.705974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
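Each container definition above carries a healthcheck of the form `['CMD-SHELL', 'healthcheck_port <service> 5672']`, i.e. the container is considered healthy while the service holds a connection to RabbitMQ's port 5672. A minimal sketch of such a port probe, assuming a plain TCP connect is enough (the real kolla healthcheck script also verifies which process owns the connection; `healthcheck_port` here is an illustrative stand-in, not kolla's implementation):

```python
import socket

def healthcheck_port(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout.

    Sketch of a port-based container healthcheck; kolla's real script
    additionally checks that the *service process* owns the connection.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

The `interval`/`retries`/`start_period`/`timeout` values in the log map onto how often the container runtime invokes such a probe, how many consecutive failures mark the container unhealthy, and the grace period after start.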
2025-08-29 15:07:28.705985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.706000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:07:28.706010 | orchestrator | 2025-08-29 15:07:28.706066 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 15:07:28.706083 | orchestrator | Friday 29 August 2025 15:05:49 +0000 (0:00:05.075) 0:01:41.382 ********* 2025-08-29 15:07:28.706093 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:28.706103 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:07:28.706113 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:07:28.706123 | orchestrator | 2025-08-29 15:07:28.706133 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-08-29 15:07:28.706143 | orchestrator | Friday 29 August 2025 15:05:49 +0000 
(0:00:00.400) 0:01:41.783 ********* 2025-08-29 15:07:28.706153 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-08-29 15:07:28.706163 | orchestrator | 2025-08-29 15:07:28.706172 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-08-29 15:07:28.706182 | orchestrator | Friday 29 August 2025 15:05:51 +0000 (0:00:01.923) 0:01:43.706 ********* 2025-08-29 15:07:28.706192 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:07:28.706202 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-08-29 15:07:28.706212 | orchestrator | 2025-08-29 15:07:28.706222 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-08-29 15:07:28.706231 | orchestrator | Friday 29 August 2025 15:05:54 +0000 (0:00:02.575) 0:01:46.282 ********* 2025-08-29 15:07:28.706241 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:07:28.706251 | orchestrator | 2025-08-29 15:07:28.706261 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 15:07:28.706271 | orchestrator | Friday 29 August 2025 15:06:10 +0000 (0:00:16.280) 0:02:02.563 ********* 2025-08-29 15:07:28.706281 | orchestrator | 2025-08-29 15:07:28.706290 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 15:07:28.706300 | orchestrator | Friday 29 August 2025 15:06:10 +0000 (0:00:00.623) 0:02:03.187 ********* 2025-08-29 15:07:28.706311 | orchestrator | 2025-08-29 15:07:28.706320 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 15:07:28.706330 | orchestrator | Friday 29 August 2025 15:06:11 +0000 (0:00:00.072) 0:02:03.260 ********* 2025-08-29 15:07:28.706340 | orchestrator | 2025-08-29 15:07:28.706349 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 
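The repeated "Flush handlers" tasks followed by the RUNNING HANDLER entries below reflect Ansible's handler semantics: a handler can be notified by many tasks during the play but runs only once per flush, in notification order. A toy model of that deduplication (illustrative only, not Ansible's actual scheduler):

```python
class HandlerQueue:
    """Toy model of Ansible handler notify/flush behaviour: duplicate
    notifications collapse to one run, order of first notification wins."""

    def __init__(self):
        self.notified = []  # preserves first-notification order

    def notify(self, name):
        if name not in self.notified:
            self.notified.append(name)

    def flush(self):
        # Run (here: return) the pending handlers once, then clear the queue.
        ran, self.notified = self.notified, []
        return ran
```

This is why each "Restart designate-* container" handler appears exactly once per node even though several config-copy tasks notified it.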
2025-08-29 15:07:28.706359 | orchestrator | Friday 29 August 2025 15:06:11 +0000 (0:00:00.065) 0:02:03.325 ********* 2025-08-29 15:07:28.706369 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:07:28.706378 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:07:28.706388 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:07:28.706417 | orchestrator | 2025-08-29 15:07:28.706428 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-08-29 15:07:28.706438 | orchestrator | Friday 29 August 2025 15:06:22 +0000 (0:00:11.788) 0:02:15.113 ********* 2025-08-29 15:07:28.706448 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:07:28.706457 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:07:28.706467 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:07:28.706476 | orchestrator | 2025-08-29 15:07:28.706491 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-08-29 15:07:28.706501 | orchestrator | Friday 29 August 2025 15:06:34 +0000 (0:00:11.784) 0:02:26.898 ********* 2025-08-29 15:07:28.706511 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:07:28.706520 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:07:28.706530 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:07:28.706539 | orchestrator | 2025-08-29 15:07:28.706549 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-08-29 15:07:28.706561 | orchestrator | Friday 29 August 2025 15:06:48 +0000 (0:00:13.365) 0:02:40.264 ********* 2025-08-29 15:07:28.706578 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:07:28.706602 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:07:28.706620 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:07:28.706636 | orchestrator | 2025-08-29 15:07:28.706652 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-08-29 
15:07:28.706677 | orchestrator | Friday 29 August 2025 15:07:01 +0000 (0:00:13.159) 0:02:53.424 ********* 2025-08-29 15:07:28.706692 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:07:28.706708 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:07:28.706724 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:07:28.706741 | orchestrator | 2025-08-29 15:07:28.706758 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-08-29 15:07:28.706775 | orchestrator | Friday 29 August 2025 15:07:12 +0000 (0:00:11.688) 0:03:05.112 ********* 2025-08-29 15:07:28.706791 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:07:28.706801 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:07:28.706811 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:07:28.706820 | orchestrator | 2025-08-29 15:07:28.706830 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-08-29 15:07:28.706840 | orchestrator | Friday 29 August 2025 15:07:19 +0000 (0:00:06.310) 0:03:11.423 ********* 2025-08-29 15:07:28.706849 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:07:28.706859 | orchestrator | 2025-08-29 15:07:28.706869 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:07:28.706878 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 15:07:28.706889 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:07:28.706899 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:07:28.706909 | orchestrator | 2025-08-29 15:07:28.706918 | orchestrator | 2025-08-29 15:07:28.706938 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:07:28.706948 | orchestrator | 
Friday 29 August 2025 15:07:26 +0000 (0:00:06.984) 0:03:18.407 ********* 2025-08-29 15:07:28.706958 | orchestrator | =============================================================================== 2025-08-29 15:07:28.706968 | orchestrator | designate : Copying over designate.conf -------------------------------- 23.81s 2025-08-29 15:07:28.706977 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.28s 2025-08-29 15:07:28.706987 | orchestrator | designate : Restart designate-central container ------------------------ 13.37s 2025-08-29 15:07:28.706997 | orchestrator | designate : Restart designate-producer container ----------------------- 13.16s 2025-08-29 15:07:28.707006 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 11.79s 2025-08-29 15:07:28.707016 | orchestrator | designate : Restart designate-api container ---------------------------- 11.78s 2025-08-29 15:07:28.707026 | orchestrator | designate : Restart designate-mdns container --------------------------- 11.69s 2025-08-29 15:07:28.707035 | orchestrator | designate : Copying over pools.yaml ------------------------------------ 10.12s 2025-08-29 15:07:28.707045 | orchestrator | designate : Copying over config.json files for services ----------------- 7.44s 2025-08-29 15:07:28.707055 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.98s 2025-08-29 15:07:28.707064 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.50s 2025-08-29 15:07:28.707074 | orchestrator | designate : Restart designate-worker container -------------------------- 6.31s 2025-08-29 15:07:28.707084 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.87s 2025-08-29 15:07:28.707094 | orchestrator | designate : Check designate containers ---------------------------------- 5.08s 2025-08-29 15:07:28.707103 | orchestrator | designate : Copying 
over rndc.conf -------------------------------------- 4.56s 2025-08-29 15:07:28.707113 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.21s 2025-08-29 15:07:28.707123 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.11s 2025-08-29 15:07:28.707133 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.96s 2025-08-29 15:07:28.707149 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.47s 2025-08-29 15:07:28.707159 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.14s 2025-08-29 15:07:28.707168 | orchestrator | 2025-08-29 15:07:28 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:07:28.707178 | orchestrator | 2025-08-29 15:07:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:31.755030 | orchestrator | 2025-08-29 15:07:31 | INFO  | Task fd212a7f-8578-4f55-b6a5-d94f006cfc21 is in state STARTED 2025-08-29 15:07:31.756848 | orchestrator | 2025-08-29 15:07:31 | INFO  | Task f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:07:31.758572 | orchestrator | 2025-08-29 15:07:31 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:07:31.760792 | orchestrator | 2025-08-29 15:07:31 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:07:31.760826 | orchestrator | 2025-08-29 15:07:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:34.802715 | orchestrator | 2025-08-29 15:07:34 | INFO  | Task fd212a7f-8578-4f55-b6a5-d94f006cfc21 is in state STARTED 2025-08-29 15:07:34.803914 | orchestrator | 2025-08-29 15:07:34 | INFO  | Task f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:07:34.805764 | orchestrator | 2025-08-29 15:07:34 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 
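The interleaved "Task <id> is in state STARTED ... Wait 1 second(s) until the next check" lines are the osism client polling several Celery-style task IDs until each reaches a terminal state. A sketch of that loop, assuming a `get_state(task_id)` callback (a hypothetical name; the real client queries the osism API):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll each task until it leaves STARTED, or until timeout.

    Sketch of the wait loop seen in the log; get_state is an assumed
    callback returning e.g. "STARTED", "SUCCESS" or "FAILURE".
    Returns True if all tasks reached a terminal state in time.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending and time.monotonic() < deadline:
        for tid in list(pending):
            state = get_state(tid)
            print(f"Task {tid} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(tid)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return not pending
```

In the log four task IDs are polled together, which is why each check prints one state line per still-pending task before the single wait message.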
2025-08-29 15:07:34.807051 | orchestrator | 2025-08-29 15:07:34 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:07:34.807544 | orchestrator | 2025-08-29 15:07:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:37.851329 | orchestrator | 2025-08-29 15:07:37 | INFO  | Task fd212a7f-8578-4f55-b6a5-d94f006cfc21 is in state STARTED 2025-08-29 15:07:37.852179 | orchestrator | 2025-08-29 15:07:37 | INFO  | Task f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:07:37.853239 | orchestrator | 2025-08-29 15:07:37 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:07:37.854504 | orchestrator | 2025-08-29 15:07:37 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:07:37.854531 | orchestrator | 2025-08-29 15:07:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:40.897034 | orchestrator | 2025-08-29 15:07:40 | INFO  | Task fd212a7f-8578-4f55-b6a5-d94f006cfc21 is in state SUCCESS 2025-08-29 15:07:40.898472 | orchestrator | 2025-08-29 15:07:40.898543 | orchestrator | 2025-08-29 15:07:40.898557 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:07:40.898569 | orchestrator | 2025-08-29 15:07:40.898579 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:07:40.898590 | orchestrator | Friday 29 August 2025 15:06:30 +0000 (0:00:00.750) 0:00:00.750 ********* 2025-08-29 15:07:40.898600 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:07:40.898612 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:07:40.898622 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:07:40.898632 | orchestrator | 2025-08-29 15:07:40.898642 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:07:40.898652 | orchestrator | Friday 29 August 2025 15:06:30 +0000 (0:00:00.367) 
0:00:01.117 ********* 2025-08-29 15:07:40.898662 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-08-29 15:07:40.898672 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-08-29 15:07:40.898682 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-08-29 15:07:40.898692 | orchestrator | 2025-08-29 15:07:40.898702 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-08-29 15:07:40.898735 | orchestrator | 2025-08-29 15:07:40.898746 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 15:07:40.898756 | orchestrator | Friday 29 August 2025 15:06:31 +0000 (0:00:00.513) 0:00:01.631 ********* 2025-08-29 15:07:40.898765 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:07:40.898776 | orchestrator | 2025-08-29 15:07:40.898786 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-08-29 15:07:40.898796 | orchestrator | Friday 29 August 2025 15:06:31 +0000 (0:00:00.666) 0:00:02.297 ********* 2025-08-29 15:07:40.898805 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-08-29 15:07:40.898815 | orchestrator | 2025-08-29 15:07:40.898830 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-08-29 15:07:40.898846 | orchestrator | Friday 29 August 2025 15:06:35 +0000 (0:00:03.743) 0:00:06.041 ********* 2025-08-29 15:07:40.898860 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-08-29 15:07:40.898876 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-08-29 15:07:40.898892 | orchestrator | 2025-08-29 15:07:40.898908 | orchestrator | TASK [service-ks-register : placement | 
Creating projects] ********************* 2025-08-29 15:07:40.898925 | orchestrator | Friday 29 August 2025 15:06:42 +0000 (0:00:06.522) 0:00:12.563 ********* 2025-08-29 15:07:40.898942 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:07:40.898958 | orchestrator | 2025-08-29 15:07:40.898973 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-08-29 15:07:40.898983 | orchestrator | Friday 29 August 2025 15:06:45 +0000 (0:00:03.123) 0:00:15.687 ********* 2025-08-29 15:07:40.898992 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:07:40.899002 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-08-29 15:07:40.899011 | orchestrator | 2025-08-29 15:07:40.899020 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-08-29 15:07:40.899029 | orchestrator | Friday 29 August 2025 15:06:49 +0000 (0:00:04.068) 0:00:19.755 ********* 2025-08-29 15:07:40.899039 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:07:40.899048 | orchestrator | 2025-08-29 15:07:40.899058 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-08-29 15:07:40.899081 | orchestrator | Friday 29 August 2025 15:06:52 +0000 (0:00:03.423) 0:00:23.178 ********* 2025-08-29 15:07:40.899090 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-08-29 15:07:40.899099 | orchestrator | 2025-08-29 15:07:40.899108 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 15:07:40.899117 | orchestrator | Friday 29 August 2025 15:06:57 +0000 (0:00:04.394) 0:00:27.573 ********* 2025-08-29 15:07:40.899126 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:40.899135 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:07:40.899144 | orchestrator | skipping: [testbed-node-2] 
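The "Creating endpoints" task above registers one internal and one public URL per service (for placement: `https://api-int.testbed.osism.xyz:8780` and `https://api.testbed.osism.xyz:8780`). A small illustrative helper for how such a URL pair is derived from the FQDNs and port (an assumption for illustration; kolla-ansible builds these from its own inventory variables):

```python
def service_endpoints(internal_fqdn, external_fqdn, port, scheme="https"):
    """Build the internal/public endpoint URL pair registered in Keystone.

    Illustrative sketch only; the parameter names are assumptions, not
    kolla-ansible's actual variable names.
    """
    return {
        "internal": f"{scheme}://{internal_fqdn}:{port}",
        "public": f"{scheme}://{external_fqdn}:{port}",
    }
```

The subsequent projects/users/roles tasks then ensure the `service` project exists, create the service user in it, and grant that user the `admin` role, which is the standard Keystone bootstrap for an OpenStack service account.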
2025-08-29 15:07:40.899152 | orchestrator | 2025-08-29 15:07:40.899160 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-08-29 15:07:40.899168 | orchestrator | Friday 29 August 2025 15:06:57 +0000 (0:00:00.399) 0:00:27.972 ********* 2025-08-29 15:07:40.899180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:07:40.899214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:07:40.899224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:07:40.899232 | orchestrator | 2025-08-29 15:07:40.899240 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-08-29 15:07:40.899248 | orchestrator | Friday 29 August 2025 15:06:58 +0000 (0:00:00.889) 0:00:28.861 ********* 2025-08-29 15:07:40.899256 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:40.899264 | orchestrator | 2025-08-29 15:07:40.899272 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-08-29 15:07:40.899415 | orchestrator | Friday 29 August 2025 15:06:58 +0000 (0:00:00.147) 0:00:29.008 ********* 2025-08-29 15:07:40.899426 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:40.899435 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:07:40.899443 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:07:40.899450 | orchestrator | 
2025-08-29 15:07:40.899458 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 15:07:40.899466 | orchestrator | Friday 29 August 2025 15:06:59 +0000 (0:00:00.548) 0:00:29.557 ********* 2025-08-29 15:07:40.899474 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:07:40.899482 | orchestrator | 2025-08-29 15:07:40.899497 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-08-29 15:07:40.899505 | orchestrator | Friday 29 August 2025 15:06:59 +0000 (0:00:00.755) 0:00:30.313 ********* 2025-08-29 15:07:40.899514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:07:40.899541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:07:40.899550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:07:40.899558 | orchestrator | 2025-08-29 15:07:40.899566 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-08-29 15:07:40.899574 | orchestrator | Friday 29 August 2025 15:07:01 +0000 (0:00:01.539) 0:00:31.852 ********* 2025-08-29 15:07:40.899582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:07:40.899590 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:40.899603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:07:40.899617 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:07:40.899630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:07:40.899638 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:07:40.899646 | orchestrator | 2025-08-29 15:07:40.899655 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-08-29 15:07:40.899662 | orchestrator | Friday 29 August 2025 15:07:02 +0000 (0:00:00.919) 0:00:32.772 ********* 2025-08-29 15:07:40.899671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:07:40.899679 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:40.899687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:07:40.899696 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:07:40.899707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:07:40.899721 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:07:40.899729 | orchestrator | 2025-08-29 15:07:40.899737 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-08-29 15:07:40.899745 | orchestrator | Friday 29 August 2025 15:07:04 +0000 (0:00:02.041) 0:00:34.813 ********* 2025-08-29 15:07:40.899757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:07:40.899766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': 
'30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:07:40.899774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:07:40.899783 | orchestrator | 2025-08-29 15:07:40.899791 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-08-29 15:07:40.899799 | orchestrator | Friday 29 August 2025 15:07:06 +0000 (0:00:01.783) 0:00:36.597 ********* 2025-08-29 15:07:40.899811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:07:40.899824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:07:40.899838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:07:40.899847 | orchestrator | 2025-08-29 15:07:40.899855 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-08-29 15:07:40.899863 | orchestrator | Friday 29 August 2025 15:07:09 +0000 (0:00:03.029) 0:00:39.627 ********* 2025-08-29 15:07:40.899877 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 15:07:40.899891 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 15:07:40.899904 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 15:07:40.899916 | orchestrator | 2025-08-29 15:07:40.899930 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-08-29 15:07:40.899942 | orchestrator | Friday 29 August 2025 15:07:11 +0000 (0:00:01.787) 0:00:41.414 ********* 2025-08-29 15:07:40.899956 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:07:40.899970 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:07:40.899983 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:07:40.899996 | orchestrator | 2025-08-29 15:07:40.900010 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-08-29 15:07:40.900020 | orchestrator | Friday 29 August 2025 15:07:12 +0000 (0:00:01.415) 0:00:42.829 ********* 2025-08-29 15:07:40.900030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:07:40.900046 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:40.900060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:07:40.900070 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:07:40.900086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:07:40.900096 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:07:40.900106 | orchestrator | 2025-08-29 15:07:40.900115 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-08-29 15:07:40.900124 | orchestrator | Friday 29 August 2025 15:07:13 +0000 (0:00:00.675) 0:00:43.505 ********* 2025-08-29 15:07:40.900134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:07:40.900143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:07:40.900166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2025-08-29 15:07:40.900176 | orchestrator | 2025-08-29 15:07:40.900184 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-08-29 15:07:40.900193 | orchestrator | Friday 29 August 2025 15:07:14 +0000 (0:00:01.537) 0:00:45.043 ********* 2025-08-29 15:07:40.900202 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:07:40.900211 | orchestrator | 2025-08-29 15:07:40.900220 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-08-29 15:07:40.900228 | orchestrator | Friday 29 August 2025 15:07:17 +0000 (0:00:02.637) 0:00:47.680 ********* 2025-08-29 15:07:40.900238 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:07:40.900246 | orchestrator | 2025-08-29 15:07:40.900255 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-08-29 15:07:40.900264 | orchestrator | Friday 29 August 2025 15:07:19 +0000 (0:00:02.271) 0:00:49.952 ********* 2025-08-29 15:07:40.900273 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:07:40.900282 | orchestrator | 2025-08-29 15:07:40.900290 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-08-29 15:07:40.900299 | orchestrator | Friday 29 August 2025 15:07:34 +0000 (0:00:14.904) 0:01:04.857 ********* 2025-08-29 15:07:40.900323 | orchestrator | 2025-08-29 15:07:40.900340 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-08-29 15:07:40.900350 | orchestrator | Friday 29 August 2025 15:07:34 +0000 (0:00:00.061) 0:01:04.918 ********* 2025-08-29 15:07:40.900359 | orchestrator | 2025-08-29 15:07:40.900372 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-08-29 15:07:40.900382 | orchestrator | Friday 29 August 2025 15:07:34 +0000 (0:00:00.061) 0:01:04.979 ********* 2025-08-29 15:07:40.900390 | orchestrator | 2025-08-29 
15:07:40.900430 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-08-29 15:07:40.900438 | orchestrator | Friday 29 August 2025 15:07:34 +0000 (0:00:00.063) 0:01:05.042 *********
2025-08-29 15:07:40.900446 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:07:40.900454 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:07:40.900462 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:07:40.900470 | orchestrator |
2025-08-29 15:07:40.900478 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:07:40.900487 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 15:07:40.900503 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 15:07:40.900511 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 15:07:40.900519 | orchestrator |
2025-08-29 15:07:40.900527 | orchestrator |
2025-08-29 15:07:40.900534 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:07:40.900542 | orchestrator | Friday 29 August 2025 15:07:40 +0000 (0:00:05.575) 0:01:10.618 *********
2025-08-29 15:07:40.900550 | orchestrator | ===============================================================================
2025-08-29 15:07:40.900558 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.90s
2025-08-29 15:07:40.900565 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.52s
2025-08-29 15:07:40.900573 | orchestrator | placement : Restart placement-api container ----------------------------- 5.58s
2025-08-29 15:07:40.900581 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.39s
2025-08-29 15:07:40.900588 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.07s
2025-08-29 15:07:40.900596 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.74s
2025-08-29 15:07:40.900604 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.42s
2025-08-29 15:07:40.900612 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.12s
2025-08-29 15:07:40.900619 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.03s
2025-08-29 15:07:40.900627 | orchestrator | placement : Creating placement databases -------------------------------- 2.64s
2025-08-29 15:07:40.900635 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.27s
2025-08-29 15:07:40.900643 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 2.04s
2025-08-29 15:07:40.900651 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.79s
2025-08-29 15:07:40.900658 | orchestrator | placement : Copying over config.json files for services ----------------- 1.78s
2025-08-29 15:07:40.900666 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.54s
2025-08-29 15:07:40.900674 | orchestrator | placement : Check placement containers ---------------------------------- 1.54s
2025-08-29 15:07:40.900686 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.42s
2025-08-29 15:07:40.900694 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.92s
2025-08-29 15:07:40.900702 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.89s
2025-08-29 15:07:40.900710 | orchestrator | placement : include_tasks ----------------------------------------------- 0.76s
2025-08-29 15:07:40.900717 | 
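The TASKS RECAP above is Ansible's per-task timing summary (the `Friday 29 August … (0:00:05.575)` stamps and the sorted `task ---- 14.90s` list are the output style of the `profile_tasks` callback). A minimal sketch of extracting the slowest tasks from such recap lines — the function name and regex here are my own, not part of any tool in this job:

```python
import re

# Matches profile_tasks-style lines: "<task name> ----…---- 14.90s"
_RECAP_RE = re.compile(r"^(?P<task>.+?)\s*-{2,}\s*(?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Return (task, seconds) pairs from recap lines, slowest first.

    Non-matching lines (separators like '====…') are skipped.
    """
    results = []
    for line in lines:
        m = _RECAP_RE.match(line.strip())
        if m:
            results.append((m.group("task"), float(m.group("secs"))))
    return sorted(results, key=lambda pair: pair[1], reverse=True)
```

Applied to this recap, the 14.90s placement bootstrap container would sort first, consistent with the ordering the callback already prints.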
orchestrator | 2025-08-29 15:07:40 | INFO  | Task f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:07:40.901643 | orchestrator | 2025-08-29 15:07:40 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:07:40.904185 | orchestrator | 2025-08-29 15:07:40 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:07:40.904237 | orchestrator | 2025-08-29 15:07:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:43.954646 | orchestrator | 2025-08-29 15:07:43 | INFO  | Task f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:07:43.956526 | orchestrator | 2025-08-29 15:07:43 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:07:43.958103 | orchestrator | 2025-08-29 15:07:43 | INFO  | Task 35e7da70-c77a-480f-b5c8-cf8f8488be06 is in state STARTED 2025-08-29 15:07:43.961950 | orchestrator | 2025-08-29 15:07:43 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:07:43.962095 | orchestrator | 2025-08-29 15:07:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:46.999471 | orchestrator | 2025-08-29 15:07:46 | INFO  | Task f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:07:46.999859 | orchestrator | 2025-08-29 15:07:47 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:07:47.002725 | orchestrator | 2025-08-29 15:07:47 | INFO  | Task 35e7da70-c77a-480f-b5c8-cf8f8488be06 is in state STARTED 2025-08-29 15:07:47.002762 | orchestrator | 2025-08-29 15:07:47 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:07:47.002769 | orchestrator | 2025-08-29 15:07:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:50.054873 | orchestrator | 2025-08-29 15:07:50 | INFO  | Task f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:07:50.055151 | orchestrator | 2025-08-29 
15:07:50 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:07:50.056386 | orchestrator | 2025-08-29 15:07:50 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:07:50.057698 | orchestrator | 2025-08-29 15:07:50 | INFO  | Task 35e7da70-c77a-480f-b5c8-cf8f8488be06 is in state SUCCESS 2025-08-29 15:07:50.059683 | orchestrator | 2025-08-29 15:07:50 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:07:50.059715 | orchestrator | 2025-08-29 15:07:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:53.110357 | orchestrator | 2025-08-29 15:07:53 | INFO  | Task f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:07:53.114513 | orchestrator | 2025-08-29 15:07:53 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:07:53.115363 | orchestrator | 2025-08-29 15:07:53 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:07:53.116277 | orchestrator | 2025-08-29 15:07:53 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:07:53.116311 | orchestrator | 2025-08-29 15:07:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:56.158122 | orchestrator | 2025-08-29 15:07:56 | INFO  | Task f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:07:56.160696 | orchestrator | 2025-08-29 15:07:56 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:07:56.163037 | orchestrator | 2025-08-29 15:07:56 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:07:56.164327 | orchestrator | 2025-08-29 15:07:56 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:07:56.164379 | orchestrator | 2025-08-29 15:07:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:59.207548 | orchestrator | 2025-08-29 15:07:59 | INFO  | Task 
f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:07:59.209095 | orchestrator | 2025-08-29 15:07:59 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:07:59.209955 | orchestrator | 2025-08-29 15:07:59 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:07:59.211151 | orchestrator | 2025-08-29 15:07:59 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:07:59.211190 | orchestrator | 2025-08-29 15:07:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:02.272816 | orchestrator | 2025-08-29 15:08:02 | INFO  | Task f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:08:02.276696 | orchestrator | 2025-08-29 15:08:02 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:08:02.281348 | orchestrator | 2025-08-29 15:08:02 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:08:02.284095 | orchestrator | 2025-08-29 15:08:02 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:08:02.284187 | orchestrator | 2025-08-29 15:08:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:05.331889 | orchestrator | 2025-08-29 15:08:05 | INFO  | Task f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:08:05.332132 | orchestrator | 2025-08-29 15:08:05 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:08:05.333821 | orchestrator | 2025-08-29 15:08:05 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:08:05.334848 | orchestrator | 2025-08-29 15:08:05 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:08:05.334882 | orchestrator | 2025-08-29 15:08:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:08.391779 | orchestrator | 2025-08-29 15:08:08 | INFO  | Task 
f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:08:08.392598 | orchestrator | 2025-08-29 15:08:08 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:08:08.393947 | orchestrator | 2025-08-29 15:08:08 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:08:08.395286 | orchestrator | 2025-08-29 15:08:08 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:08:08.395336 | orchestrator | 2025-08-29 15:08:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:11.433193 | orchestrator | 2025-08-29 15:08:11 | INFO  | Task f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:08:11.434586 | orchestrator | 2025-08-29 15:08:11 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:08:11.437801 | orchestrator | 2025-08-29 15:08:11 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:08:11.441363 | orchestrator | 2025-08-29 15:08:11 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:08:11.441458 | orchestrator | 2025-08-29 15:08:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:14.484087 | orchestrator | 2025-08-29 15:08:14 | INFO  | Task f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:08:14.485136 | orchestrator | 2025-08-29 15:08:14 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:08:14.497740 | orchestrator | 2025-08-29 15:08:14 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:08:14.497829 | orchestrator | 2025-08-29 15:08:14 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:08:14.497841 | orchestrator | 2025-08-29 15:08:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:17.533265 | orchestrator | 2025-08-29 15:08:17 | INFO  | Task 
f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:08:17.535162 | orchestrator | 2025-08-29 15:08:17 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:08:17.538234 | orchestrator | 2025-08-29 15:08:17 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:08:17.539831 | orchestrator | 2025-08-29 15:08:17 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:08:17.539942 | orchestrator | 2025-08-29 15:08:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:20.582968 | orchestrator | 2025-08-29 15:08:20 | INFO  | Task f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:08:20.583092 | orchestrator | 2025-08-29 15:08:20 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:08:20.583832 | orchestrator | 2025-08-29 15:08:20 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:08:20.585326 | orchestrator | 2025-08-29 15:08:20 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:08:20.585377 | orchestrator | 2025-08-29 15:08:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:23.730604 | orchestrator | 2025-08-29 15:08:23 | INFO  | Task f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:08:23.730764 | orchestrator | 2025-08-29 15:08:23 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:08:23.731443 | orchestrator | 2025-08-29 15:08:23 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:08:23.732211 | orchestrator | 2025-08-29 15:08:23 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:08:23.732246 | orchestrator | 2025-08-29 15:08:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:26.763009 | orchestrator | 2025-08-29 15:08:26 | INFO  | Task 
f2958a84-af8f-4950-b973-89e2d154a5ab is in state STARTED 2025-08-29 15:08:26.763230 | orchestrator | 2025-08-29 15:08:26 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:08:26.763947 | orchestrator | 2025-08-29 15:08:26 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:08:26.766190 | orchestrator | 2025-08-29 15:08:26 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:08:26.766254 | orchestrator | 2025-08-29 15:08:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:36.892794 | orchestrator | 2025-08-29 15:09:36 | INFO  | Task f2958a84-af8f-4950-b973-89e2d154a5ab is in state SUCCESS 2025-08-29 15:09:36.893957 | orchestrator | 2025-08-29 15:09:36.894055 | orchestrator | 2025-08-29 15:09:36.894065 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:09:36.894072 | orchestrator | 2025-08-29 15:09:36.894078 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:09:36.894083 | orchestrator | Friday 29 August 2025 15:07:45 +0000 (0:00:00.228) 0:00:00.228 ********* 2025-08-29 15:09:36.894089 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:09:36.894096 | orchestrator | ok: [testbed-node-1]
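The wait loop seen above — logging each task still in state STARTED, sleeping a fixed interval, and checking again until every task reaches a terminal state — can be sketched as a small polling helper. This is a minimal illustration only, not the actual osism client code: `get_state` stands in for whatever API call the orchestrator really makes, and the timeout handling is an assumption.

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600.0):
    """Poll task states until no task is left in STARTED.

    get_state(task_id) is a hypothetical stand-in for the real task-state
    lookup; it should return a state string such as "STARTED" or "SUCCESS".
    Returns a dict mapping each task id to its last observed state.
    """
    pending = set(task_ids)
    states = {}
    deadline = time.monotonic() + timeout
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        # Log the current state of every task that has not finished yet,
        # mirroring the "Task <uuid> is in state STARTED" lines in the log.
        for task_id in sorted(pending):
            states[task_id] = get_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        pending = {t for t in pending if states[t] == "STARTED"}
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

Note that the log timestamps advance by roughly three seconds per cycle even though the configured wait is one second, because the state lookups themselves take time on top of the sleep.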
2025-08-29 15:09:36.894101 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:09:36.894106 | orchestrator | 2025-08-29 15:09:36.894112 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:09:36.894117 | orchestrator | Friday 29 August 2025 15:07:45 +0000 (0:00:00.310) 0:00:00.539 ********* 2025-08-29 15:09:36.894122 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-08-29 15:09:36.894128 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-08-29 15:09:36.894133 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-08-29 15:09:36.894138 | orchestrator | 2025-08-29 15:09:36.894144 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-08-29 15:09:36.894149 | orchestrator | 2025-08-29 15:09:36.894154 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-08-29 15:09:36.894160 | orchestrator | Friday 29 August 2025 15:07:46 +0000 (0:00:00.863) 0:00:01.402 ********* 2025-08-29 15:09:36.894168 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:09:36.894176 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:09:36.894184 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:09:36.894192 | orchestrator | 2025-08-29 15:09:36.894200 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:09:36.894209 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:09:36.894220 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:09:36.894259 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:09:36.894268 | orchestrator | 2025-08-29 15:09:36.894276 | orchestrator | 2025-08-29 15:09:36.894283 | orchestrator | TASKS RECAP 
******************************************************************** 2025-08-29 15:09:36.894430 | orchestrator | Friday 29 August 2025 15:07:47 +0000 (0:00:00.752) 0:00:02.154 ********* 2025-08-29 15:09:36.894444 | orchestrator | =============================================================================== 2025-08-29 15:09:36.894453 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.86s 2025-08-29 15:09:36.894461 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.75s 2025-08-29 15:09:36.894471 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-08-29 15:09:36.894479 | orchestrator | 2025-08-29 15:09:36.894487 | orchestrator | 2025-08-29 15:09:36.894495 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:09:36.894504 | orchestrator | 2025-08-29 15:09:36.894513 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:09:36.894522 | orchestrator | Friday 29 August 2025 15:07:30 +0000 (0:00:00.271) 0:00:00.271 ********* 2025-08-29 15:09:36.894548 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:09:36.894558 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:09:36.894568 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:09:36.894574 | orchestrator | 2025-08-29 15:09:36.894581 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:09:36.894586 | orchestrator | Friday 29 August 2025 15:07:30 +0000 (0:00:00.278) 0:00:00.549 ********* 2025-08-29 15:09:36.894592 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-08-29 15:09:36.894598 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-08-29 15:09:36.894604 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-08-29 15:09:36.894610 | orchestrator | 2025-08-29 
15:09:36.894616 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-08-29 15:09:36.894622 | orchestrator | 2025-08-29 15:09:36.894628 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 15:09:36.894634 | orchestrator | Friday 29 August 2025 15:07:30 +0000 (0:00:00.425) 0:00:00.975 ********* 2025-08-29 15:09:36.894639 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:09:36.894645 | orchestrator | 2025-08-29 15:09:36.894651 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-08-29 15:09:36.894657 | orchestrator | Friday 29 August 2025 15:07:31 +0000 (0:00:00.500) 0:00:01.475 ********* 2025-08-29 15:09:36.894664 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-08-29 15:09:36.894669 | orchestrator | 2025-08-29 15:09:36.894675 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-08-29 15:09:36.894681 | orchestrator | Friday 29 August 2025 15:07:35 +0000 (0:00:03.586) 0:00:05.062 ********* 2025-08-29 15:09:36.894687 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-08-29 15:09:36.894693 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-08-29 15:09:36.894699 | orchestrator | 2025-08-29 15:09:36.894705 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-08-29 15:09:36.894710 | orchestrator | Friday 29 August 2025 15:07:41 +0000 (0:00:06.562) 0:00:11.624 ********* 2025-08-29 15:09:36.894715 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:09:36.894721 | orchestrator | 2025-08-29 15:09:36.894726 | orchestrator | TASK [service-ks-register : magnum | 
Creating users] *************************** 2025-08-29 15:09:36.894731 | orchestrator | Friday 29 August 2025 15:07:44 +0000 (0:00:03.256) 0:00:14.881 ********* 2025-08-29 15:09:36.894751 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:09:36.894765 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-08-29 15:09:36.894770 | orchestrator | 2025-08-29 15:09:36.894776 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-08-29 15:09:36.894781 | orchestrator | Friday 29 August 2025 15:07:48 +0000 (0:00:04.043) 0:00:18.925 ********* 2025-08-29 15:09:36.894786 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:09:36.894792 | orchestrator | 2025-08-29 15:09:36.894797 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-08-29 15:09:36.894802 | orchestrator | Friday 29 August 2025 15:07:52 +0000 (0:00:03.433) 0:00:22.358 ********* 2025-08-29 15:09:36.894807 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-08-29 15:09:36.894812 | orchestrator | 2025-08-29 15:09:36.894817 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-08-29 15:09:36.894822 | orchestrator | Friday 29 August 2025 15:07:56 +0000 (0:00:04.432) 0:00:26.791 ********* 2025-08-29 15:09:36.894827 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:36.894832 | orchestrator | 2025-08-29 15:09:36.894837 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-08-29 15:09:36.894842 | orchestrator | Friday 29 August 2025 15:08:00 +0000 (0:00:03.321) 0:00:30.112 ********* 2025-08-29 15:09:36.894847 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:36.894852 | orchestrator | 2025-08-29 15:09:36.894857 | orchestrator | TASK [magnum : Creating Magnum trustee user role] 
****************************** 2025-08-29 15:09:36.894862 | orchestrator | Friday 29 August 2025 15:08:04 +0000 (0:00:04.094) 0:00:34.206 ********* 2025-08-29 15:09:36.894868 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:36.894873 | orchestrator | 2025-08-29 15:09:36.894878 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-08-29 15:09:36.894883 | orchestrator | Friday 29 August 2025 15:08:08 +0000 (0:00:03.897) 0:00:38.104 ********* 2025-08-29 15:09:36.894892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:09:36.894906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:09:36.894912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:09:36.894929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:36.894936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:36.894942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:36.894947 | orchestrator | 2025-08-29 15:09:36.894953 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-08-29 15:09:36.894958 | orchestrator | Friday 29 August 2025 15:08:09 +0000 (0:00:01.650) 0:00:39.754 ********* 2025-08-29 15:09:36.894967 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:36.894972 | 
orchestrator | 2025-08-29 15:09:36.894977 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-08-29 15:09:36.894982 | orchestrator | Friday 29 August 2025 15:08:10 +0000 (0:00:00.312) 0:00:40.067 ********* 2025-08-29 15:09:36.894988 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:36.894993 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:36.894998 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:36.895004 | orchestrator | 2025-08-29 15:09:36.895009 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-08-29 15:09:36.895014 | orchestrator | Friday 29 August 2025 15:08:10 +0000 (0:00:00.623) 0:00:40.690 ********* 2025-08-29 15:09:36.895019 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:09:36.895024 | orchestrator | 2025-08-29 15:09:36.895037 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-08-29 15:09:36.895042 | orchestrator | Friday 29 August 2025 15:08:11 +0000 (0:00:01.046) 0:00:41.736 ********* 2025-08-29 15:09:36.895048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:09:36.895063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:09:36.895071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:09:36.895085 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:36.895094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:36.895109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:36.895119 | orchestrator | 2025-08-29 15:09:36.895128 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-08-29 15:09:36.895136 | orchestrator | Friday 29 August 2025 15:08:14 +0000 (0:00:02.809) 0:00:44.546 ********* 2025-08-29 15:09:36.895144 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:09:36.895151 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:09:36.895160 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:09:36.895167 | orchestrator | 2025-08-29 15:09:36.895180 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 15:09:36.895189 | orchestrator | Friday 29 August 2025 15:08:15 +0000 (0:00:00.716) 0:00:45.263 ********* 2025-08-29 15:09:36.895198 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:09:36.895208 | orchestrator | 2025-08-29 15:09:36.895213 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-08-29 15:09:36.895218 | orchestrator | Friday 29 August 2025 15:08:16 +0000 (0:00:00.992) 0:00:46.255 ********* 2025-08-29 15:09:36.895224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:09:36.895230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:09:36.895244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 
'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:09:36.895250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:36.895263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:36.895269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:36.895274 | orchestrator | 2025-08-29 15:09:36.895280 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-08-29 15:09:36.895285 | orchestrator | Friday 29 August 2025 15:08:19 +0000 (0:00:02.832) 0:00:49.087 ********* 2025-08-29 15:09:36.895291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:09:36.895311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:09:36.895319 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:36.895328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:09:36.895408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:09:36.895421 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:36.895430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:09:36.895438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:09:36.895453 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:36.895461 | orchestrator | 2025-08-29 15:09:36.895474 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-08-29 15:09:36.895482 | orchestrator | Friday 29 August 2025 15:08:19 +0000 (0:00:00.799) 0:00:49.887 ********* 2025-08-29 15:09:36.895489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:09:36.895498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:09:36.895510 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:36.895520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:09:36.895528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 
'timeout': '30'}}})  2025-08-29 15:09:36.895544 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:36.895557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:09:36.895566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:09:36.895574 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:36.895583 | orchestrator | 2025-08-29 15:09:36.895591 | orchestrator | TASK [magnum : Copying over config.json files 
for services] ******************** 2025-08-29 15:09:36.895602 | orchestrator | Friday 29 August 2025 15:08:21 +0000 (0:00:01.141) 0:00:51.029 ********* 2025-08-29 15:09:36.895865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:09:36.895885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}}) 2025-08-29 15:09:36.895904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:09:36.895919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:36.895928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:36.895944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:36.895952 | orchestrator | 2025-08-29 15:09:36.895961 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-08-29 15:09:36.895969 | orchestrator | Friday 29 August 2025 15:08:24 +0000 (0:00:03.349) 0:00:54.379 ********* 2025-08-29 15:09:36.895978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:09:36.895993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:09:36.896008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:09:36.896015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:36.896027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:36.896033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:36.896044 | orchestrator | 2025-08-29 15:09:36.896049 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-08-29 15:09:36.896054 | orchestrator | Friday 29 August 2025 15:08:36 +0000 (0:00:12.410) 0:01:06.789 ********* 2025-08-29 15:09:36.896060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:09:36.896069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:09:36.896075 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:36.896080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:09:36.896090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:09:36.896101 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:36.896106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:09:36.896112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:09:36.896121 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:36.896127 | orchestrator | 2025-08-29 15:09:36.896132 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-08-29 15:09:36.896138 | orchestrator | Friday 29 August 2025 15:08:39 +0000 (0:00:02.512) 0:01:09.302 ********* 2025-08-29 15:09:36.896143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:09:36.896152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:09:36.896158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:09:36.896168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:36.896181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:36.896187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:09:36.896192 | orchestrator | 2025-08-29 15:09:36.896197 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 15:09:36.896203 | orchestrator | Friday 29 August 2025 15:08:45 +0000 (0:00:05.773) 0:01:15.075 ********* 2025-08-29 15:09:36.896208 | orchestrator 
| skipping: [testbed-node-0] 2025-08-29 15:09:36.896213 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:36.896218 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:36.896223 | orchestrator | 2025-08-29 15:09:36.896229 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-08-29 15:09:36.896234 | orchestrator | Friday 29 August 2025 15:08:45 +0000 (0:00:00.594) 0:01:15.669 ********* 2025-08-29 15:09:36.896239 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:36.896244 | orchestrator | 2025-08-29 15:09:36.896249 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-08-29 15:09:36.896255 | orchestrator | Friday 29 August 2025 15:08:47 +0000 (0:00:02.232) 0:01:17.901 ********* 2025-08-29 15:09:36.896264 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:36.896270 | orchestrator | 2025-08-29 15:09:36.896275 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-08-29 15:09:36.896281 | orchestrator | Friday 29 August 2025 15:08:50 +0000 (0:00:02.458) 0:01:20.359 ********* 2025-08-29 15:09:36.896306 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:36.896320 | orchestrator | 2025-08-29 15:09:36.896326 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-08-29 15:09:36.896331 | orchestrator | Friday 29 August 2025 15:09:06 +0000 (0:00:15.839) 0:01:36.199 ********* 2025-08-29 15:09:36.896337 | orchestrator | 2025-08-29 15:09:36.896432 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-08-29 15:09:36.896439 | orchestrator | Friday 29 August 2025 15:09:06 +0000 (0:00:00.080) 0:01:36.279 ********* 2025-08-29 15:09:36.896444 | orchestrator | 2025-08-29 15:09:36.896449 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-08-29 
15:09:36.896454 | orchestrator | Friday 29 August 2025 15:09:06 +0000 (0:00:00.084) 0:01:36.363 ********* 2025-08-29 15:09:36.896460 | orchestrator | 2025-08-29 15:09:36.896465 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-08-29 15:09:36.896470 | orchestrator | Friday 29 August 2025 15:09:06 +0000 (0:00:00.075) 0:01:36.439 ********* 2025-08-29 15:09:36.896475 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:36.896480 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:09:36.896485 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:09:36.896490 | orchestrator | 2025-08-29 15:09:36.896495 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-08-29 15:09:36.896502 | orchestrator | Friday 29 August 2025 15:09:20 +0000 (0:00:13.899) 0:01:50.339 ********* 2025-08-29 15:09:36.896508 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:36.896514 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:09:36.896520 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:09:36.896526 | orchestrator | 2025-08-29 15:09:36.896532 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:09:36.896539 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:09:36.896545 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 15:09:36.896552 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 15:09:36.896557 | orchestrator | 2025-08-29 15:09:36.896563 | orchestrator | 2025-08-29 15:09:36.896569 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:09:36.896576 | orchestrator | Friday 29 August 2025 15:09:33 +0000 (0:00:13.212) 0:02:03.551 ********* 
2025-08-29 15:09:36.896582 | orchestrator | =============================================================================== 2025-08-29 15:09:36.896587 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.84s 2025-08-29 15:09:36.896592 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 13.90s 2025-08-29 15:09:36.896597 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 13.21s 2025-08-29 15:09:36.896603 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 12.41s 2025-08-29 15:09:36.896616 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.56s 2025-08-29 15:09:36.896625 | orchestrator | magnum : Check magnum containers ---------------------------------------- 5.77s 2025-08-29 15:09:36.896633 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.43s 2025-08-29 15:09:36.896641 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.09s 2025-08-29 15:09:36.896649 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.04s 2025-08-29 15:09:36.896667 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.90s 2025-08-29 15:09:36.896675 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.59s 2025-08-29 15:09:36.896685 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.43s 2025-08-29 15:09:36.896694 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.35s 2025-08-29 15:09:36.896704 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.32s 2025-08-29 15:09:36.896732 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.26s 2025-08-29 
15:09:36.896739 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.83s 2025-08-29 15:09:36.896744 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.81s 2025-08-29 15:09:36.896749 | orchestrator | magnum : Copying over existing policy file ------------------------------ 2.51s 2025-08-29 15:09:36.896754 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.46s 2025-08-29 15:09:36.896759 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.23s 2025-08-29 15:09:36.896765 | orchestrator | 2025-08-29 15:09:36 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:09:36.896771 | orchestrator | 2025-08-29 15:09:36 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:09:36.896845 | orchestrator | 2025-08-29 15:09:36 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:09:36.900097 | orchestrator | 2025-08-29 15:09:36 | INFO  | Task 1d9a7e30-a62c-404f-8f51-ff42a93e7951 is in state STARTED 2025-08-29 15:09:36.900158 | orchestrator | 2025-08-29 15:09:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:39.952505 | orchestrator | 2025-08-29 15:09:39 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:09:39.953288 | orchestrator | 2025-08-29 15:09:39 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:09:39.956739 | orchestrator | 2025-08-29 15:09:39 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:09:39.959298 | orchestrator | 2025-08-29 15:09:39 | INFO  | Task 1d9a7e30-a62c-404f-8f51-ff42a93e7951 is in state STARTED 2025-08-29 15:09:39.959406 | orchestrator | 2025-08-29 15:09:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:42.993850 | orchestrator | 2025-08-29 15:09:42 | INFO  | Task 
90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:09:42.993951 | orchestrator | 2025-08-29 15:09:42 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:09:42.995007 | orchestrator | 2025-08-29 15:09:42 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:09:42.996710 | orchestrator | 2025-08-29 15:09:42 | INFO  | Task 1d9a7e30-a62c-404f-8f51-ff42a93e7951 is in state STARTED 2025-08-29 15:09:42.996756 | orchestrator | 2025-08-29 15:09:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:46.044542 | orchestrator | 2025-08-29 15:09:46 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:09:46.045327 | orchestrator | 2025-08-29 15:09:46 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:09:46.047074 | orchestrator | 2025-08-29 15:09:46 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:09:46.048989 | orchestrator | 2025-08-29 15:09:46 | INFO  | Task 1d9a7e30-a62c-404f-8f51-ff42a93e7951 is in state STARTED 2025-08-29 15:09:46.049037 | orchestrator | 2025-08-29 15:09:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:49.092067 | orchestrator | 2025-08-29 15:09:49 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:09:49.094196 | orchestrator | 2025-08-29 15:09:49 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:09:49.097777 | orchestrator | 2025-08-29 15:09:49 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state STARTED 2025-08-29 15:09:49.102138 | orchestrator | 2025-08-29 15:09:49 | INFO  | Task 1d9a7e30-a62c-404f-8f51-ff42a93e7951 is in state STARTED 2025-08-29 15:09:49.102205 | orchestrator | 2025-08-29 15:09:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:52.152767 | orchestrator | 2025-08-29 15:09:52 | INFO  | Task 
90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:09:52.154295 | orchestrator | 2025-08-29 15:09:52 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:09:52.155458 | orchestrator | 2025-08-29 15:09:52 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:09:52.159086 | orchestrator | 2025-08-29 15:09:52.159144 | orchestrator | 2025-08-29 15:09:52 | INFO  | Task 2c9fd878-4f1b-4c2f-b6be-4d549ead6bd7 is in state SUCCESS 2025-08-29 15:09:52.160736 | orchestrator | 2025-08-29 15:09:52.160789 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:09:52.160799 | orchestrator | 2025-08-29 15:09:52.160806 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:09:52.160814 | orchestrator | Friday 29 August 2025 15:04:08 +0000 (0:00:00.609) 0:00:00.609 ********* 2025-08-29 15:09:52.160820 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:09:52.160828 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:09:52.160834 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:09:52.160840 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:09:52.160846 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:09:52.160852 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:09:52.160858 | orchestrator | 2025-08-29 15:09:52.160865 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:09:52.160871 | orchestrator | Friday 29 August 2025 15:04:09 +0000 (0:00:01.165) 0:00:01.775 ********* 2025-08-29 15:09:52.160878 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-08-29 15:09:52.160885 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-08-29 15:09:52.160891 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-08-29 15:09:52.160897 | orchestrator | ok: [testbed-node-3] => 
(item=enable_neutron_True) 2025-08-29 15:09:52.160904 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-08-29 15:09:52.160910 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-08-29 15:09:52.160916 | orchestrator | 2025-08-29 15:09:52.160923 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-08-29 15:09:52.160930 | orchestrator | 2025-08-29 15:09:52.160937 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 15:09:52.160944 | orchestrator | Friday 29 August 2025 15:04:10 +0000 (0:00:00.695) 0:00:02.471 ********* 2025-08-29 15:09:52.160952 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:09:52.160969 | orchestrator | 2025-08-29 15:09:52.160975 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-08-29 15:09:52.160981 | orchestrator | Friday 29 August 2025 15:04:11 +0000 (0:00:01.027) 0:00:03.499 ********* 2025-08-29 15:09:52.160987 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:09:52.160991 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:09:52.160995 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:09:52.161000 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:09:52.161004 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:09:52.161008 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:09:52.161032 | orchestrator | 2025-08-29 15:09:52.161037 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-08-29 15:09:52.161041 | orchestrator | Friday 29 August 2025 15:04:12 +0000 (0:00:01.069) 0:00:04.569 ********* 2025-08-29 15:09:52.161045 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:09:52.161049 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:09:52.161053 | orchestrator 
| ok: [testbed-node-2] 2025-08-29 15:09:52.161057 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:09:52.161060 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:09:52.161064 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:09:52.161068 | orchestrator | 2025-08-29 15:09:52.161072 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-08-29 15:09:52.161076 | orchestrator | Friday 29 August 2025 15:04:13 +0000 (0:00:01.003) 0:00:05.572 ********* 2025-08-29 15:09:52.161080 | orchestrator | ok: [testbed-node-0] => { 2025-08-29 15:09:52.161085 | orchestrator |  "changed": false, 2025-08-29 15:09:52.161089 | orchestrator |  "msg": "All assertions passed" 2025-08-29 15:09:52.161093 | orchestrator | } 2025-08-29 15:09:52.161098 | orchestrator | ok: [testbed-node-1] => { 2025-08-29 15:09:52.161101 | orchestrator |  "changed": false, 2025-08-29 15:09:52.161106 | orchestrator |  "msg": "All assertions passed" 2025-08-29 15:09:52.161112 | orchestrator | } 2025-08-29 15:09:52.161119 | orchestrator | ok: [testbed-node-2] => { 2025-08-29 15:09:52.161125 | orchestrator |  "changed": false, 2025-08-29 15:09:52.161131 | orchestrator |  "msg": "All assertions passed" 2025-08-29 15:09:52.161137 | orchestrator | } 2025-08-29 15:09:52.161143 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 15:09:52.161149 | orchestrator |  "changed": false, 2025-08-29 15:09:52.161154 | orchestrator |  "msg": "All assertions passed" 2025-08-29 15:09:52.161160 | orchestrator | } 2025-08-29 15:09:52.161167 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 15:09:52.161173 | orchestrator |  "changed": false, 2025-08-29 15:09:52.161179 | orchestrator |  "msg": "All assertions passed" 2025-08-29 15:09:52.161184 | orchestrator | } 2025-08-29 15:09:52.161191 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 15:09:52.161197 | orchestrator |  "changed": false, 2025-08-29 15:09:52.161249 | orchestrator |  "msg": "All assertions passed" 2025-08-29 
15:09:52.161254 | orchestrator | } 2025-08-29 15:09:52.161258 | orchestrator | 2025-08-29 15:09:52.161262 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-08-29 15:09:52.161266 | orchestrator | Friday 29 August 2025 15:04:14 +0000 (0:00:00.832) 0:00:06.405 ********* 2025-08-29 15:09:52.161292 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.161296 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.161300 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:52.161304 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.161308 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.161311 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.161315 | orchestrator | 2025-08-29 15:09:52.161319 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-08-29 15:09:52.161324 | orchestrator | Friday 29 August 2025 15:04:15 +0000 (0:00:00.654) 0:00:07.060 ********* 2025-08-29 15:09:52.161426 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-08-29 15:09:52.161436 | orchestrator | 2025-08-29 15:09:52.161443 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-08-29 15:09:52.161451 | orchestrator | Friday 29 August 2025 15:04:18 +0000 (0:00:03.519) 0:00:10.579 ********* 2025-08-29 15:09:52.161456 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-08-29 15:09:52.161462 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-08-29 15:09:52.161467 | orchestrator | 2025-08-29 15:09:52.161499 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-08-29 15:09:52.161508 | orchestrator | Friday 29 August 2025 15:04:24 +0000 (0:00:05.916) 0:00:16.496 ********* 2025-08-29 
15:09:52.161543 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:09:52.161550 | orchestrator | 2025-08-29 15:09:52.161554 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-08-29 15:09:52.161559 | orchestrator | Friday 29 August 2025 15:04:27 +0000 (0:00:02.815) 0:00:19.311 ********* 2025-08-29 15:09:52.161563 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:09:52.161567 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-08-29 15:09:52.161572 | orchestrator | 2025-08-29 15:09:52.161576 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-08-29 15:09:52.161581 | orchestrator | Friday 29 August 2025 15:04:31 +0000 (0:00:03.629) 0:00:22.940 ********* 2025-08-29 15:09:52.161585 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:09:52.161590 | orchestrator | 2025-08-29 15:09:52.161594 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-08-29 15:09:52.161599 | orchestrator | Friday 29 August 2025 15:04:34 +0000 (0:00:03.151) 0:00:26.091 ********* 2025-08-29 15:09:52.161603 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-08-29 15:09:52.161607 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-08-29 15:09:52.161613 | orchestrator | 2025-08-29 15:09:52.161621 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 15:09:52.161628 | orchestrator | Friday 29 August 2025 15:04:41 +0000 (0:00:06.913) 0:00:33.004 ********* 2025-08-29 15:09:52.161634 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.161639 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.161644 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:52.161648 | orchestrator | skipping: [testbed-node-3] 
2025-08-29 15:09:52.161653 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.161657 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.161661 | orchestrator | 2025-08-29 15:09:52.161666 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-08-29 15:09:52.161671 | orchestrator | Friday 29 August 2025 15:04:41 +0000 (0:00:00.830) 0:00:33.834 ********* 2025-08-29 15:09:52.161675 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.161680 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.161684 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:52.161688 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.161692 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.161697 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.161701 | orchestrator | 2025-08-29 15:09:52.161708 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-08-29 15:09:52.161714 | orchestrator | Friday 29 August 2025 15:04:44 +0000 (0:00:02.397) 0:00:36.231 ********* 2025-08-29 15:09:52.161721 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:09:52.161726 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:09:52.161732 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:09:52.161738 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:09:52.161744 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:09:52.161750 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:09:52.161824 | orchestrator | 2025-08-29 15:09:52.161832 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-08-29 15:09:52.161839 | orchestrator | Friday 29 August 2025 15:04:45 +0000 (0:00:01.625) 0:00:37.857 ********* 2025-08-29 15:09:52.161846 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.161853 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.161860 | orchestrator 
| skipping: [testbed-node-2] 2025-08-29 15:09:52.161867 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.161874 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.161881 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.161886 | orchestrator | 2025-08-29 15:09:52.161892 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-08-29 15:09:52.161900 | orchestrator | Friday 29 August 2025 15:04:49 +0000 (0:00:03.278) 0:00:41.136 ********* 2025-08-29 15:09:52.161918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:09:52.161955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:09:52.161966 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:09:52.161975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:09:52.161983 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:09:52.161997 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:09:52.162003 | orchestrator | 2025-08-29 15:09:52.162009 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-08-29 15:09:52.162062 | orchestrator | Friday 29 August 2025 15:04:53 +0000 (0:00:04.706) 0:00:45.842 ********* 2025-08-29 15:09:52.162073 | orchestrator | [WARNING]: Skipped 2025-08-29 
15:09:52.162078 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-08-29 15:09:52.162082 | orchestrator | due to this access issue: 2025-08-29 15:09:52.162086 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-08-29 15:09:52.162090 | orchestrator | a directory 2025-08-29 15:09:52.162095 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:09:52.162099 | orchestrator | 2025-08-29 15:09:52.162103 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 15:09:52.162114 | orchestrator | Friday 29 August 2025 15:04:55 +0000 (0:00:01.501) 0:00:47.343 ********* 2025-08-29 15:09:52.162119 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:09:52.162124 | orchestrator | 2025-08-29 15:09:52.162128 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-08-29 15:09:52.162132 | orchestrator | Friday 29 August 2025 15:04:58 +0000 (0:00:02.669) 0:00:50.012 ********* 2025-08-29 15:09:52.162136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:09:52.162140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:09:52.162151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 
15:09:52.162158 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:09:52.162167 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:09:52.162171 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:09:52.162175 | orchestrator | 2025-08-29 15:09:52.162179 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-08-29 15:09:52.162183 | orchestrator | Friday 29 August 2025 15:05:03 +0000 (0:00:05.722) 0:00:55.735 ********* 2025-08-29 15:09:52.162190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.162200 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.162207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.162213 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.162223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.162233 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:52.162241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.162247 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.162252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.162263 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.162269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.162277 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.162283 | orchestrator | 2025-08-29 15:09:52.162290 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-08-29 15:09:52.162296 | orchestrator | Friday 29 August 2025 15:05:07 +0000 (0:00:03.959) 0:00:59.695 ********* 2025-08-29 15:09:52.162303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.162311 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.162328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.162361 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.162368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.162383 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:52.162390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.162397 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.162402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.162409 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.162415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2025-08-29 15:09:52.162421 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.162427 | orchestrator | 2025-08-29 15:09:52.162433 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-08-29 15:09:52.162439 | orchestrator | Friday 29 August 2025 15:05:13 +0000 (0:00:05.532) 0:01:05.227 ********* 2025-08-29 15:09:52.162446 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.162452 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.162458 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.162464 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:52.162470 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.162476 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.162482 | orchestrator | 2025-08-29 15:09:52.162488 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-08-29 15:09:52.162499 | orchestrator | Friday 29 August 2025 15:05:17 +0000 (0:00:04.600) 0:01:09.827 ********* 2025-08-29 15:09:52.162504 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.162508 | orchestrator | 2025-08-29 15:09:52.162512 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-08-29 15:09:52.162515 | orchestrator | Friday 29 August 2025 15:05:18 +0000 (0:00:00.156) 0:01:09.983 ********* 2025-08-29 15:09:52.162519 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.162523 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.162527 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:52.162531 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.162539 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.162543 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.162547 | orchestrator | 2025-08-29 15:09:52.162551 | orchestrator | TASK [neutron : Copying over existing policy file] 
***************************** 2025-08-29 15:09:52.162555 | orchestrator | Friday 29 August 2025 15:05:19 +0000 (0:00:00.942) 0:01:10.926 ********* 2025-08-29 15:09:52.162559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.162563 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:52.162588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.162593 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.162597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.162601 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.162607 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.162611 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.162619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.162626 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.162630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.162634 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.162638 | orchestrator | 2025-08-29 15:09:52.162642 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-08-29 15:09:52.162646 | orchestrator | Friday 29 August 2025 15:05:23 +0000 (0:00:04.381) 0:01:15.308 ********* 
2025-08-29 15:09:52.162650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:09:52.162654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:09:52.162666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:09:52.162676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:09:52.162683 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:09:52.162690 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:09:52.162696 | orchestrator | 2025-08-29 15:09:52.162703 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-08-29 15:09:52.162709 | orchestrator | Friday 29 August 2025 15:05:29 +0000 (0:00:06.366) 0:01:21.675 ********* 2025-08-29 15:09:52.162715 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:09:52.162731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:09:52.162740 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:09:52.162744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:09:52.162748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:09:52.162752 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:09:52.162759 | orchestrator | 2025-08-29 15:09:52.162765 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-08-29 15:09:52.162769 | orchestrator | Friday 29 August 2025 15:05:38 +0000 (0:00:08.527) 0:01:30.202 ********* 2025-08-29 15:09:52.162777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.162781 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.162786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.162790 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.162794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.162798 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.162802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.162812 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:52.162819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.162823 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.162830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.162834 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.162838 | orchestrator | 2025-08-29 15:09:52.162842 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-08-29 15:09:52.162846 | orchestrator | Friday 29 August 2025 15:05:41 +0000 (0:00:03.193) 0:01:33.396 ********* 2025-08-29 15:09:52.162850 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:52.162853 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.162857 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.162861 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.162865 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:09:52.162869 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:09:52.162873 | orchestrator | 2025-08-29 15:09:52.162877 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-08-29 15:09:52.162880 | orchestrator | Friday 29 August 2025 15:05:45 +0000 (0:00:04.332) 0:01:37.729 ********* 2025-08-29 15:09:52.162884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.162888 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.162892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.162900 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.162903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.162910 | orchestrator | skipping: [testbed-node-5] 
2025-08-29 15:09:52.162919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:09:52.162924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:09:52.162928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:09:52.162932 | orchestrator | 2025-08-29 15:09:52.162936 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-08-29 15:09:52.162940 | orchestrator | Friday 29 August 2025 15:05:50 +0000 (0:00:04.566) 0:01:42.296 ********* 2025-08-29 15:09:52.162951 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.162955 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.162958 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.162962 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.162966 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.162969 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:52.162973 | orchestrator | 2025-08-29 15:09:52.162977 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-08-29 15:09:52.162981 | orchestrator | Friday 29 August 2025 15:05:53 +0000 (0:00:02.989) 0:01:45.285 ********* 2025-08-29 15:09:52.162985 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.162989 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.162992 | orchestrator | skipping: 
[testbed-node-2]
2025-08-29 15:09:52.162996 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:09:52.163000 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:09:52.163004 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:09:52.163007 | orchestrator |
2025-08-29 15:09:52.163011 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-08-29 15:09:52.163015 | orchestrator | Friday 29 August 2025 15:05:56 +0000 (0:00:02.714) 0:01:48.000 *********
2025-08-29 15:09:52.163019 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:09:52.163022 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:09:52.163026 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:09:52.163030 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:09:52.163033 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:09:52.163037 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:09:52.163041 | orchestrator |
2025-08-29 15:09:52.163045 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-08-29 15:09:52.163048 | orchestrator | Friday 29 August 2025 15:05:58 +0000 (0:00:02.603) 0:01:50.604 *********
2025-08-29 15:09:52.163052 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:09:52.163056 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:09:52.163062 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:09:52.163066 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:09:52.163070 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:09:52.163074 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:09:52.163078 | orchestrator |
2025-08-29 15:09:52.163082 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-08-29 15:09:52.163085 | orchestrator | Friday 29 August 2025 15:06:01 +0000 (0:00:03.219) 0:01:53.824 *********
2025-08-29 15:09:52.163089 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:09:52.163093 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:09:52.163097 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:09:52.163100 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:09:52.163107 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:09:52.163111 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:09:52.163115 | orchestrator |
2025-08-29 15:09:52.163119 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-08-29 15:09:52.163123 | orchestrator | Friday 29 August 2025 15:06:05 +0000 (0:00:03.361) 0:01:57.186 *********
2025-08-29 15:09:52.163127 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:09:52.163130 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:09:52.163134 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:09:52.163138 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:09:52.163142 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:09:52.163145 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:09:52.163149 | orchestrator |
2025-08-29 15:09:52.163153 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-08-29 15:09:52.163156 | orchestrator | Friday 29 August 2025 15:06:08 +0000 (0:00:02.948) 0:02:00.134 *********
2025-08-29 15:09:52.163160 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-08-29 15:09:52.163168 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:09:52.163172 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-08-29 15:09:52.163176 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:09:52.163180 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-08-29 15:09:52.163184 | orchestrator | skipping: [testbed-node-5]
2025-08-29
15:09:52.163187 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:09:52.163191 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:52.163195 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:09:52.163199 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.163203 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:09:52.163207 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.163210 | orchestrator | 2025-08-29 15:09:52.163214 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-08-29 15:09:52.163218 | orchestrator | Friday 29 August 2025 15:06:11 +0000 (0:00:03.351) 0:02:03.486 ********* 2025-08-29 15:09:52.163222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.163226 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.163230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.163234 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.163243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.163251 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:52.163255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.163259 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.163263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.163267 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.163271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.163274 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.163278 | orchestrator | 2025-08-29 15:09:52.163282 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-08-29 15:09:52.163286 | orchestrator | Friday 29 August 2025 15:06:15 +0000 (0:00:04.293) 0:02:07.779 ********* 2025-08-29 15:09:52.163292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.163296 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.163305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.163311 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.163316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.163320 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:52.163324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.163328 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.163368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.163374 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.163381 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.163389 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.163393 | orchestrator | 2025-08-29 15:09:52.163397 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-08-29 15:09:52.163401 | orchestrator | Friday 29 August 2025 15:06:19 +0000 (0:00:03.225) 0:02:11.004 ********* 2025-08-29 15:09:52.163405 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.163412 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:52.163416 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.163420 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.163423 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.163427 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.163431 | orchestrator | 2025-08-29 15:09:52.163435 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-08-29 15:09:52.163439 | orchestrator | Friday 29 August 2025 15:06:21 +0000 (0:00:02.841) 0:02:13.846 ********* 2025-08-29 15:09:52.163443 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.163446 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.163450 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:52.163454 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:09:52.163458 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:09:52.163462 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:09:52.163466 | orchestrator | 2025-08-29 15:09:52.163470 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-08-29 15:09:52.163474 | orchestrator | Friday 29 August 2025 15:06:28 +0000 (0:00:06.430) 0:02:20.277 
*********
2025-08-29 15:09:52.163478 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:09:52.163482 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:09:52.163486 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:09:52.163490 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:09:52.163494 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:09:52.163498 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:09:52.163501 | orchestrator |
2025-08-29 15:09:52.163505 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-08-29 15:09:52.163509 | orchestrator | Friday 29 August 2025 15:06:31 +0000 (0:00:03.040) 0:02:23.318 *********
2025-08-29 15:09:52.163513 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:09:52.163517 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:09:52.163521 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:09:52.163525 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:09:52.163531 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:09:52.163537 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:09:52.163543 | orchestrator |
2025-08-29 15:09:52.163549 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-08-29 15:09:52.163555 | orchestrator | Friday 29 August 2025 15:06:34 +0000 (0:00:02.643) 0:02:25.961 *********
2025-08-29 15:09:52.163562 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:09:52.163569 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:09:52.163575 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:09:52.163582 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:09:52.163588 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:09:52.163595 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:09:52.163603 | orchestrator |
2025-08-29 15:09:52.163607 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-08-29 15:09:52.163610 | orchestrator | Friday 29 August 2025 15:06:37 +0000 (0:00:03.295) 0:02:29.257 *********
2025-08-29 15:09:52.163614 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:09:52.163618 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:09:52.163622 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:09:52.163625 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:09:52.163629 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:09:52.163637 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:09:52.163641 | orchestrator |
2025-08-29 15:09:52.163647 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-08-29 15:09:52.163653 | orchestrator | Friday 29 August 2025 15:06:40 +0000 (0:00:03.259) 0:02:32.516 *********
2025-08-29 15:09:52.163659 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:09:52.163665 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:09:52.163671 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:09:52.163677 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:09:52.163682 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:09:52.163689 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:09:52.163695 | orchestrator |
2025-08-29 15:09:52.163701 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-08-29 15:09:52.163707 | orchestrator | Friday 29 August 2025 15:06:42 +0000 (0:00:02.311) 0:02:34.827 *********
2025-08-29 15:09:52.163714 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:09:52.163720 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:09:52.163726 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:09:52.163733 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:09:52.163739 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:09:52.163745 | orchestrator | skipping:
[testbed-node-4] 2025-08-29 15:09:52.163750 | orchestrator | 2025-08-29 15:09:52.163755 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-08-29 15:09:52.163762 | orchestrator | Friday 29 August 2025 15:06:45 +0000 (0:00:02.690) 0:02:37.519 ********* 2025-08-29 15:09:52.163767 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:52.163778 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.163786 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.163791 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.163797 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.163803 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.163809 | orchestrator | 2025-08-29 15:09:52.163816 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-08-29 15:09:52.163822 | orchestrator | Friday 29 August 2025 15:06:48 +0000 (0:00:02.561) 0:02:40.080 ********* 2025-08-29 15:09:52.163829 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:09:52.163838 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.163846 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:09:52.163850 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.163854 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:09:52.163857 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:52.163861 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:09:52.163865 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:09:52.163874 | orchestrator | skipping: [testbed-node-3] 
2025-08-29 15:09:52.163878 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.163882 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:09:52.163886 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.163890 | orchestrator | 2025-08-29 15:09:52.163894 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-08-29 15:09:52.163897 | orchestrator | Friday 29 August 2025 15:06:52 +0000 (0:00:04.465) 0:02:44.546 ********* 2025-08-29 15:09:52.163902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.163911 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.163915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.163919 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:52.163922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:09:52.163926 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.163934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.163938 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.163946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.163954 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.163958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:09:52.163962 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.163966 | orchestrator | 2025-08-29 15:09:52.163970 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-08-29 15:09:52.163974 | orchestrator | Friday 29 August 2025 15:06:55 +0000 (0:00:02.549) 0:02:47.096 ********* 2025-08-29 15:09:52.163978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:09:52.163982 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:09:52.163992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:09:52.163997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:09:52.164006 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:09:52.164011 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:09:52.164015 | orchestrator | 2025-08-29 15:09:52.164018 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 15:09:52.164022 | orchestrator | Friday 29 August 2025 15:06:58 +0000 (0:00:03.104) 0:02:50.200 ********* 2025-08-29 15:09:52.164026 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:52.164030 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:52.164034 | orchestrator | 
skipping: [testbed-node-2] 2025-08-29 15:09:52.164037 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:09:52.164042 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:09:52.164045 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:09:52.164049 | orchestrator | 2025-08-29 15:09:52.164053 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-08-29 15:09:52.164058 | orchestrator | Friday 29 August 2025 15:06:58 +0000 (0:00:00.655) 0:02:50.856 ********* 2025-08-29 15:09:52.164064 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:52.164073 | orchestrator | 2025-08-29 15:09:52.164080 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-08-29 15:09:52.164085 | orchestrator | Friday 29 August 2025 15:07:01 +0000 (0:00:02.144) 0:02:53.000 ********* 2025-08-29 15:09:52.164091 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:52.164096 | orchestrator | 2025-08-29 15:09:52.164102 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-08-29 15:09:52.164107 | orchestrator | Friday 29 August 2025 15:07:03 +0000 (0:00:02.336) 0:02:55.336 ********* 2025-08-29 15:09:52.164113 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:52.164119 | orchestrator | 2025-08-29 15:09:52.164124 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:09:52.164130 | orchestrator | Friday 29 August 2025 15:07:49 +0000 (0:00:46.255) 0:03:41.592 ********* 2025-08-29 15:09:52.164142 | orchestrator | 2025-08-29 15:09:52.164152 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:09:52.164158 | orchestrator | Friday 29 August 2025 15:07:49 +0000 (0:00:00.074) 0:03:41.667 ********* 2025-08-29 15:09:52.164164 | orchestrator | 2025-08-29 15:09:52.164170 | orchestrator | TASK [neutron : Flush Handlers] 
************************************************ 2025-08-29 15:09:52.164175 | orchestrator | Friday 29 August 2025 15:07:50 +0000 (0:00:00.288) 0:03:41.955 ********* 2025-08-29 15:09:52.164181 | orchestrator | 2025-08-29 15:09:52.164188 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:09:52.164194 | orchestrator | Friday 29 August 2025 15:07:50 +0000 (0:00:00.071) 0:03:42.027 ********* 2025-08-29 15:09:52.164200 | orchestrator | 2025-08-29 15:09:52.164210 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:09:52.164216 | orchestrator | Friday 29 August 2025 15:07:50 +0000 (0:00:00.111) 0:03:42.138 ********* 2025-08-29 15:09:52.164222 | orchestrator | 2025-08-29 15:09:52.164228 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:09:52.164235 | orchestrator | Friday 29 August 2025 15:07:50 +0000 (0:00:00.159) 0:03:42.298 ********* 2025-08-29 15:09:52.164241 | orchestrator | 2025-08-29 15:09:52.164249 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-08-29 15:09:52.164256 | orchestrator | Friday 29 August 2025 15:07:50 +0000 (0:00:00.117) 0:03:42.415 ********* 2025-08-29 15:09:52.164263 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:52.164270 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:09:52.164276 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:09:52.164282 | orchestrator | 2025-08-29 15:09:52.164288 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-08-29 15:09:52.164294 | orchestrator | Friday 29 August 2025 15:08:20 +0000 (0:00:30.145) 0:04:12.561 ********* 2025-08-29 15:09:52.164300 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:09:52.164307 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:09:52.164313 | orchestrator | changed: 
[testbed-node-4] 2025-08-29 15:09:52.164319 | orchestrator | 2025-08-29 15:09:52.164325 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:09:52.164331 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 15:09:52.164358 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-08-29 15:09:52.164365 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-08-29 15:09:52.164372 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 15:09:52.164378 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 15:09:52.164384 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 15:09:52.164390 | orchestrator | 2025-08-29 15:09:52.164397 | orchestrator | 2025-08-29 15:09:52.164403 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:09:52.164409 | orchestrator | Friday 29 August 2025 15:09:49 +0000 (0:01:28.767) 0:05:41.329 ********* 2025-08-29 15:09:52.164415 | orchestrator | =============================================================================== 2025-08-29 15:09:52.164421 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 88.77s 2025-08-29 15:09:52.164428 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 46.26s 2025-08-29 15:09:52.164434 | orchestrator | neutron : Restart neutron-server container ----------------------------- 30.15s 2025-08-29 15:09:52.164447 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 8.53s 2025-08-29 15:09:52.164454 | orchestrator | 
service-ks-register : neutron | Granting user roles --------------------- 6.91s 2025-08-29 15:09:52.164460 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 6.43s 2025-08-29 15:09:52.164466 | orchestrator | neutron : Copying over config.json files for services ------------------- 6.37s 2025-08-29 15:09:52.164473 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 5.92s 2025-08-29 15:09:52.164479 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 5.72s 2025-08-29 15:09:52.164485 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 5.53s 2025-08-29 15:09:52.164491 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 4.71s 2025-08-29 15:09:52.164498 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 4.60s 2025-08-29 15:09:52.164504 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.57s 2025-08-29 15:09:52.164510 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 4.47s 2025-08-29 15:09:52.164516 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.38s 2025-08-29 15:09:52.164523 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.33s 2025-08-29 15:09:52.164529 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 4.29s 2025-08-29 15:09:52.164536 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.96s 2025-08-29 15:09:52.164547 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.63s 2025-08-29 15:09:52.164554 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.52s 2025-08-29 15:09:52.164560 | orchestrator | 
2025-08-29 15:09:52 | INFO  | Task 1d9a7e30-a62c-404f-8f51-ff42a93e7951 is in state STARTED 2025-08-29 15:09:52.164567 | orchestrator | 2025-08-29 15:09:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:55.227602 | orchestrator | 2025-08-29 15:09:55 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:09:55.230366 | orchestrator | 2025-08-29 15:09:55 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:09:55.232329 | orchestrator | 2025-08-29 15:09:55 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:09:55.235270 | orchestrator | 2025-08-29 15:09:55 | INFO  | Task 1d9a7e30-a62c-404f-8f51-ff42a93e7951 is in state STARTED 2025-08-29 15:09:55.235320 | orchestrator | 2025-08-29 15:09:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:58.271144 | orchestrator | 2025-08-29 15:09:58 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:09:58.272591 | orchestrator | 2025-08-29 15:09:58 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:09:58.275987 | orchestrator | 2025-08-29 15:09:58 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:09:58.276049 | orchestrator | 2025-08-29 15:09:58 | INFO  | Task 1d9a7e30-a62c-404f-8f51-ff42a93e7951 is in state STARTED 2025-08-29 15:09:58.276061 | orchestrator | 2025-08-29 15:09:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:01.327730 | orchestrator | 2025-08-29 15:10:01 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:10:01.329800 | orchestrator | 2025-08-29 15:10:01 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:01.332312 | orchestrator | 2025-08-29 15:10:01 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:01.334466 | orchestrator | 2025-08-29 15:10:01 | INFO  | 
Task 1d9a7e30-a62c-404f-8f51-ff42a93e7951 is in state STARTED 2025-08-29 15:10:01.334526 | orchestrator | 2025-08-29 15:10:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:04.394440 | orchestrator | 2025-08-29 15:10:04 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:10:04.396388 | orchestrator | 2025-08-29 15:10:04 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:04.397624 | orchestrator | 2025-08-29 15:10:04 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:04.398829 | orchestrator | 2025-08-29 15:10:04 | INFO  | Task 1d9a7e30-a62c-404f-8f51-ff42a93e7951 is in state STARTED 2025-08-29 15:10:04.398948 | orchestrator | 2025-08-29 15:10:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:07.460999 | orchestrator | 2025-08-29 15:10:07 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:10:07.461889 | orchestrator | 2025-08-29 15:10:07 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:07.463062 | orchestrator | 2025-08-29 15:10:07 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:07.466413 | orchestrator | 2025-08-29 15:10:07 | INFO  | Task 1d9a7e30-a62c-404f-8f51-ff42a93e7951 is in state STARTED 2025-08-29 15:10:07.466469 | orchestrator | 2025-08-29 15:10:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:10.508648 | orchestrator | 2025-08-29 15:10:10 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:10:10.510297 | orchestrator | 2025-08-29 15:10:10 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:10.512350 | orchestrator | 2025-08-29 15:10:10 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:10.513809 | orchestrator | 2025-08-29 15:10:10 | INFO  | Task 
1d9a7e30-a62c-404f-8f51-ff42a93e7951 is in state STARTED 2025-08-29 15:10:10.513858 | orchestrator | 2025-08-29 15:10:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:13.557751 | orchestrator | 2025-08-29 15:10:13 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:10:13.559150 | orchestrator | 2025-08-29 15:10:13 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:13.562131 | orchestrator | 2025-08-29 15:10:13 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:13.565093 | orchestrator | 2025-08-29 15:10:13 | INFO  | Task 1d9a7e30-a62c-404f-8f51-ff42a93e7951 is in state STARTED 2025-08-29 15:10:13.565148 | orchestrator | 2025-08-29 15:10:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:16.634786 | orchestrator | 2025-08-29 15:10:16 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state STARTED 2025-08-29 15:10:16.639525 | orchestrator | 2025-08-29 15:10:16 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:16.642064 | orchestrator | 2025-08-29 15:10:16 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:16.643296 | orchestrator | 2025-08-29 15:10:16 | INFO  | Task 1d9a7e30-a62c-404f-8f51-ff42a93e7951 is in state STARTED 2025-08-29 15:10:16.644028 | orchestrator | 2025-08-29 15:10:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:19.685632 | orchestrator | 2025-08-29 15:10:19 | INFO  | Task 90d9edb0-d3b1-4cdc-b4bb-644c0267bcfc is in state SUCCESS 2025-08-29 15:10:19.686342 | orchestrator | 2025-08-29 15:10:19 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:19.687980 | orchestrator | 2025-08-29 15:10:19 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:10:19.689935 | orchestrator | 2025-08-29 15:10:19 | INFO  | Task 
493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:19.690692 | orchestrator | 2025-08-29 15:10:19 | INFO  | Task 1d9a7e30-a62c-404f-8f51-ff42a93e7951 is in state SUCCESS 2025-08-29 15:10:19.690880 | orchestrator | 2025-08-29 15:10:19.690898 | orchestrator | 2025-08-29 15:10:19.690905 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-08-29 15:10:19.690915 | orchestrator | 2025-08-29 15:10:19.690922 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-08-29 15:10:19.690929 | orchestrator | Friday 29 August 2025 15:04:07 +0000 (0:00:00.142) 0:00:00.142 ********* 2025-08-29 15:10:19.690937 | orchestrator | changed: [localhost] 2025-08-29 15:10:19.690945 | orchestrator | 2025-08-29 15:10:19.690952 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-08-29 15:10:19.690959 | orchestrator | Friday 29 August 2025 15:04:09 +0000 (0:00:01.470) 0:00:01.613 ********* 2025-08-29 15:10:19.690965 | orchestrator | 2025-08-29 15:10:19.690971 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-08-29 15:10:19.690977 | orchestrator | 2025-08-29 15:10:19.690983 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-08-29 15:10:19.690989 | orchestrator | 2025-08-29 15:10:19.690995 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-08-29 15:10:19.691001 | orchestrator | 2025-08-29 15:10:19.691007 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-08-29 15:10:19.691013 | orchestrator | 2025-08-29 15:10:19.691019 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-08-29 15:10:19.691025 | orchestrator | 2025-08-29 15:10:19.691032 | orchestrator | STILL ALIVE [task 
'Download ironic-agent initramfs' is running] **************** 2025-08-29 15:10:19.691038 | orchestrator | 2025-08-29 15:10:19.691044 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-08-29 15:10:19.691050 | orchestrator | changed: [localhost] 2025-08-29 15:10:19.691056 | orchestrator | 2025-08-29 15:10:19.691062 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-08-29 15:10:19.691068 | orchestrator | Friday 29 August 2025 15:10:03 +0000 (0:05:54.487) 0:05:56.100 ********* 2025-08-29 15:10:19.691074 | orchestrator | changed: [localhost] 2025-08-29 15:10:19.691080 | orchestrator | 2025-08-29 15:10:19.691086 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:10:19.691093 | orchestrator | 2025-08-29 15:10:19.691099 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:10:19.691105 | orchestrator | Friday 29 August 2025 15:10:17 +0000 (0:00:13.625) 0:06:09.726 ********* 2025-08-29 15:10:19.691112 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:10:19.691118 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:10:19.691125 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:10:19.691131 | orchestrator | 2025-08-29 15:10:19.691137 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:10:19.691143 | orchestrator | Friday 29 August 2025 15:10:17 +0000 (0:00:00.360) 0:06:10.086 ********* 2025-08-29 15:10:19.691150 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-08-29 15:10:19.691156 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-08-29 15:10:19.691164 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-08-29 15:10:19.691170 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 
2025-08-29 15:10:19.691176 | orchestrator | 2025-08-29 15:10:19.691182 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-08-29 15:10:19.691188 | orchestrator | skipping: no hosts matched 2025-08-29 15:10:19.691195 | orchestrator | 2025-08-29 15:10:19.691201 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:10:19.691232 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:10:19.691242 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:10:19.691251 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:10:19.691271 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:10:19.691278 | orchestrator | 2025-08-29 15:10:19.691284 | orchestrator | 2025-08-29 15:10:19.691291 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:10:19.691296 | orchestrator | Friday 29 August 2025 15:10:18 +0000 (0:00:00.591) 0:06:10.677 ********* 2025-08-29 15:10:19.691303 | orchestrator | =============================================================================== 2025-08-29 15:10:19.691309 | orchestrator | Download ironic-agent initramfs --------------------------------------- 354.49s 2025-08-29 15:10:19.691354 | orchestrator | Download ironic-agent kernel ------------------------------------------- 13.63s 2025-08-29 15:10:19.691361 | orchestrator | Ensure the destination directory exists --------------------------------- 1.47s 2025-08-29 15:10:19.691366 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s 2025-08-29 15:10:19.691372 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 
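The repeated `STILL ALIVE [task 'Download ironic-agent initramfs' is running]` markers above come from a keepalive emitter that prints while a long task blocks, so the console stream is not mistaken for a hang. A minimal sketch of that pattern (hypothetical names; not Zuul's actual streaming implementation):

```python
import threading
import time

def run_with_keepalive(task_name, task_fn, interval=0.1):
    """Run task_fn while a background thread collects periodic keepalive lines."""
    messages = []
    done = threading.Event()

    def keepalive():
        # Emit a marker each interval until the task signals completion.
        while not done.wait(interval):
            messages.append(f"STILL ALIVE [task '{task_name}' is running]")

    t = threading.Thread(target=keepalive, daemon=True)
    t.start()
    try:
        result = task_fn()
    finally:
        done.set()
        t.join()
    return result, messages

# Usage: a short sleep stands in for the ~6-minute initramfs download.
result, beats = run_with_keepalive(
    "Download ironic-agent initramfs",
    lambda: time.sleep(0.35) or "changed")
```

In the real job the interval is much longer, which is why only a handful of markers appear over the 354-second download.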
2025-08-29 15:10:19.691378 | orchestrator | 2025-08-29 15:10:19.691384 | orchestrator | 2025-08-29 15:10:19.691390 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:10:19.691395 | orchestrator | 2025-08-29 15:10:19.691399 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:10:19.691402 | orchestrator | Friday 29 August 2025 15:09:40 +0000 (0:00:00.407) 0:00:00.407 ********* 2025-08-29 15:10:19.691406 | orchestrator | ok: [testbed-manager] 2025-08-29 15:10:19.691410 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:10:19.691413 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:10:19.691417 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:10:19.691421 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:10:19.691425 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:10:19.691428 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:10:19.691432 | orchestrator | 2025-08-29 15:10:19.691436 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:10:19.691440 | orchestrator | Friday 29 August 2025 15:09:41 +0000 (0:00:01.062) 0:00:01.469 ********* 2025-08-29 15:10:19.691454 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-08-29 15:10:19.691458 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-08-29 15:10:19.691462 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-08-29 15:10:19.691466 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-08-29 15:10:19.691469 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-08-29 15:10:19.691473 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-08-29 15:10:19.691477 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-08-29 15:10:19.691480 | orchestrator | 2025-08-29 15:10:19.691484 | orchestrator 
| PLAY [Apply role ceph-rgw] ***************************************************** 2025-08-29 15:10:19.691488 | orchestrator | 2025-08-29 15:10:19.691491 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-08-29 15:10:19.691495 | orchestrator | Friday 29 August 2025 15:09:42 +0000 (0:00:00.847) 0:00:02.317 ********* 2025-08-29 15:10:19.691499 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:10:19.691506 | orchestrator | 2025-08-29 15:10:19.691511 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-08-29 15:10:19.691526 | orchestrator | Friday 29 August 2025 15:09:44 +0000 (0:00:01.964) 0:00:04.281 ********* 2025-08-29 15:10:19.691532 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-08-29 15:10:19.691538 | orchestrator | 2025-08-29 15:10:19.691543 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-08-29 15:10:19.691549 | orchestrator | Friday 29 August 2025 15:09:47 +0000 (0:00:03.585) 0:00:07.867 ********* 2025-08-29 15:10:19.691555 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-08-29 15:10:19.691561 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-08-29 15:10:19.691568 | orchestrator | 2025-08-29 15:10:19.691575 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-08-29 15:10:19.691581 | orchestrator | Friday 29 August 2025 15:09:55 +0000 (0:00:07.794) 0:00:15.661 ********* 2025-08-29 15:10:19.691588 | orchestrator | ok: [testbed-manager] => (item=service) 2025-08-29 15:10:19.691594 | orchestrator 
| 2025-08-29 15:10:19.691600 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-08-29 15:10:19.691606 | orchestrator | Friday 29 August 2025 15:09:58 +0000 (0:00:03.102) 0:00:18.763 ********* 2025-08-29 15:10:19.691612 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:10:19.691619 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2025-08-29 15:10:19.691625 | orchestrator | 2025-08-29 15:10:19.691632 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-08-29 15:10:19.691638 | orchestrator | Friday 29 August 2025 15:10:03 +0000 (0:00:04.652) 0:00:23.416 ********* 2025-08-29 15:10:19.691644 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-08-29 15:10:19.691651 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2025-08-29 15:10:19.691658 | orchestrator | 2025-08-29 15:10:19.691664 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-08-29 15:10:19.691671 | orchestrator | Friday 29 August 2025 15:10:11 +0000 (0:00:08.118) 0:00:31.534 ********* 2025-08-29 15:10:19.691676 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2025-08-29 15:10:19.691682 | orchestrator | 2025-08-29 15:10:19.691688 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:10:19.691701 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:10:19.691708 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:10:19.691715 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:10:19.691719 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
2025-08-29 15:10:19.691723 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:10:19.691726 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:10:19.691730 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:10:19.691734 | orchestrator | 2025-08-29 15:10:19.691738 | orchestrator | 2025-08-29 15:10:19.691741 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:10:19.691745 | orchestrator | Friday 29 August 2025 15:10:17 +0000 (0:00:05.733) 0:00:37.268 ********* 2025-08-29 15:10:19.691749 | orchestrator | =============================================================================== 2025-08-29 15:10:19.691758 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 8.12s 2025-08-29 15:10:19.691762 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.79s 2025-08-29 15:10:19.691766 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.73s 2025-08-29 15:10:19.691774 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.65s 2025-08-29 15:10:19.691779 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.59s 2025-08-29 15:10:19.691782 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.10s 2025-08-29 15:10:19.691786 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.96s 2025-08-29 15:10:19.691790 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.06s 2025-08-29 15:10:19.691793 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.85s 2025-08-29 15:10:19.691797 | 
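The endpoint URLs registered above are stored in Keystone with the literal `%(project_id)s` placeholder; clients resolve it per project using Python percent-formatting with a mapping. A small sketch of that substitution (the project ID below is a made-up example):

```python
# Endpoint URL exactly as registered in the play above; the placeholder
# is stored verbatim in the Keystone catalog.
public_url = "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s"

def resolve_endpoint(url, project_id):
    """Expand the %(project_id)s placeholder via percent-formatting,
    the same mechanism Keystone's catalog substitution uses."""
    return url % {"project_id": project_id}

# Hypothetical project ID for illustration only.
resolved = resolve_endpoint(public_url, "0fc1f2ab44d94f12a1d32c7a013d9a3c")
```

This is why the endpoint creation log prints the raw `AUTH_%(project_id)s` suffix rather than a concrete tenant path.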
orchestrator | 2025-08-29 15:10:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:22.740026 | orchestrator | 2025-08-29 15:10:22 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:22.744803 | orchestrator | 2025-08-29 15:10:22 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:10:22.745065 | orchestrator | 2025-08-29 15:10:22 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:10:22.746088 | orchestrator | 2025-08-29 15:10:22 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:22.746133 | orchestrator | 2025-08-29 15:10:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:25.796994 | orchestrator | 2025-08-29 15:10:25 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:25.800117 | orchestrator | 2025-08-29 15:10:25 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:10:25.802047 | orchestrator | 2025-08-29 15:10:25 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:10:25.803670 | orchestrator | 2025-08-29 15:10:25 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:25.803736 | orchestrator | 2025-08-29 15:10:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:28.845691 | orchestrator | 2025-08-29 15:10:28 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:28.846212 | orchestrator | 2025-08-29 15:10:28 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:10:28.847235 | orchestrator | 2025-08-29 15:10:28 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:10:28.848273 | orchestrator | 2025-08-29 15:10:28 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:28.848295 | orchestrator | 2025-08-29 
15:10:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:31.886751 | orchestrator | 2025-08-29 15:10:31 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:31.886835 | orchestrator | 2025-08-29 15:10:31 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:10:31.888432 | orchestrator | 2025-08-29 15:10:31 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:10:31.889929 | orchestrator | 2025-08-29 15:10:31 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:31.889961 | orchestrator | 2025-08-29 15:10:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:34.922964 | orchestrator | 2025-08-29 15:10:34 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:34.923984 | orchestrator | 2025-08-29 15:10:34 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:10:34.925230 | orchestrator | 2025-08-29 15:10:34 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:10:34.926595 | orchestrator | 2025-08-29 15:10:34 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:34.926654 | orchestrator | 2025-08-29 15:10:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:37.964198 | orchestrator | 2025-08-29 15:10:37 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:37.966375 | orchestrator | 2025-08-29 15:10:37 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:10:37.968481 | orchestrator | 2025-08-29 15:10:37 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:10:37.970830 | orchestrator | 2025-08-29 15:10:37 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:37.970872 | orchestrator | 2025-08-29 15:10:37 | INFO  | Wait 1 
second(s) until the next check 2025-08-29 15:10:41.018671 | orchestrator | 2025-08-29 15:10:41 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:41.020928 | orchestrator | 2025-08-29 15:10:41 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:10:41.021748 | orchestrator | 2025-08-29 15:10:41 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:10:41.022955 | orchestrator | 2025-08-29 15:10:41 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:41.022995 | orchestrator | 2025-08-29 15:10:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:44.060274 | orchestrator | 2025-08-29 15:10:44 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:44.061356 | orchestrator | 2025-08-29 15:10:44 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:10:44.062843 | orchestrator | 2025-08-29 15:10:44 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:10:44.064329 | orchestrator | 2025-08-29 15:10:44 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:44.064416 | orchestrator | 2025-08-29 15:10:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:47.112635 | orchestrator | 2025-08-29 15:10:47 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:47.114528 | orchestrator | 2025-08-29 15:10:47 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:10:47.115400 | orchestrator | 2025-08-29 15:10:47 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:10:47.116563 | orchestrator | 2025-08-29 15:10:47 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:47.116579 | orchestrator | 2025-08-29 15:10:47 | INFO  | Wait 1 second(s) until the next check 
2025-08-29 15:10:50.164532 | orchestrator | 2025-08-29 15:10:50 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:50.169342 | orchestrator | 2025-08-29 15:10:50 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:10:50.169392 | orchestrator | 2025-08-29 15:10:50 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:10:50.174075 | orchestrator | 2025-08-29 15:10:50 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:50.174157 | orchestrator | 2025-08-29 15:10:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:53.230510 | orchestrator | 2025-08-29 15:10:53 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:53.232936 | orchestrator | 2025-08-29 15:10:53 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:10:53.233995 | orchestrator | 2025-08-29 15:10:53 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:10:53.235765 | orchestrator | 2025-08-29 15:10:53 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:53.235814 | orchestrator | 2025-08-29 15:10:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:56.278586 | orchestrator | 2025-08-29 15:10:56 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:56.278895 | orchestrator | 2025-08-29 15:10:56 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:10:56.282689 | orchestrator | 2025-08-29 15:10:56 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:10:56.283322 | orchestrator | 2025-08-29 15:10:56 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:56.283802 | orchestrator | 2025-08-29 15:10:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:59.326624 | 
orchestrator | 2025-08-29 15:10:59 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:10:59.327055 | orchestrator | 2025-08-29 15:10:59 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:10:59.328140 | orchestrator | 2025-08-29 15:10:59 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:10:59.329111 | orchestrator | 2025-08-29 15:10:59 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:10:59.329185 | orchestrator | 2025-08-29 15:10:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:02.391647 | orchestrator | 2025-08-29 15:11:02 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:11:02.391831 | orchestrator | 2025-08-29 15:11:02 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:11:02.392908 | orchestrator | 2025-08-29 15:11:02 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:11:02.395377 | orchestrator | 2025-08-29 15:11:02 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:11:02.395449 | orchestrator | 2025-08-29 15:11:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:05.435814 | orchestrator | 2025-08-29 15:11:05 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:11:05.439352 | orchestrator | 2025-08-29 15:11:05 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:11:05.441469 | orchestrator | 2025-08-29 15:11:05 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:11:05.443811 | orchestrator | 2025-08-29 15:11:05 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:11:05.444557 | orchestrator | 2025-08-29 15:11:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:08.505966 | orchestrator | 2025-08-29 
15:11:08 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:11:08.506768 | orchestrator | 2025-08-29 15:11:08 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:11:08.507914 | orchestrator | 2025-08-29 15:11:08 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:11:08.509113 | orchestrator | 2025-08-29 15:11:08 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:11:08.509179 | orchestrator | 2025-08-29 15:11:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:11.659800 | orchestrator | 2025-08-29 15:11:11 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:11:11.660631 | orchestrator | 2025-08-29 15:11:11 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:11:11.661644 | orchestrator | 2025-08-29 15:11:11 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:11:11.663542 | orchestrator | 2025-08-29 15:11:11 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:11:11.663601 | orchestrator | 2025-08-29 15:11:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:14.701758 | orchestrator | 2025-08-29 15:11:14 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:11:14.703196 | orchestrator | 2025-08-29 15:11:14 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:11:14.705211 | orchestrator | 2025-08-29 15:11:14 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:11:14.706091 | orchestrator | 2025-08-29 15:11:14 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:11:14.706173 | orchestrator | 2025-08-29 15:11:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:17.742989 | orchestrator | 2025-08-29 15:11:17 | INFO  | Task 
60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:11:17.743086 | orchestrator | 2025-08-29 15:11:17 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:11:17.743093 | orchestrator | 2025-08-29 15:11:17 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:11:17.743986 | orchestrator | 2025-08-29 15:11:17 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:11:17.744043 | orchestrator | 2025-08-29 15:11:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:20.797019 | orchestrator | 2025-08-29 15:11:20 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:11:20.798581 | orchestrator | 2025-08-29 15:11:20 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:11:20.800264 | orchestrator | 2025-08-29 15:11:20 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:11:20.803101 | orchestrator | 2025-08-29 15:11:20 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:11:20.804114 | orchestrator | 2025-08-29 15:11:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:23.853903 | orchestrator | 2025-08-29 15:11:23 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:11:23.854974 | orchestrator | 2025-08-29 15:11:23 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:11:23.856534 | orchestrator | 2025-08-29 15:11:23 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:11:23.858153 | orchestrator | 2025-08-29 15:11:23 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:11:23.858198 | orchestrator | 2025-08-29 15:11:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:26.895530 | orchestrator | 2025-08-29 15:11:26 | INFO  | Task 
60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:11:26.896544 | orchestrator | 2025-08-29 15:11:26 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:11:26.898639 | orchestrator | 2025-08-29 15:11:26 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:11:26.900878 | orchestrator | 2025-08-29 15:11:26 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:11:26.900960 | orchestrator | 2025-08-29 15:11:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:29.933905 | orchestrator | 2025-08-29 15:11:29 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:11:29.934790 | orchestrator | 2025-08-29 15:11:29 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:11:29.935679 | orchestrator | 2025-08-29 15:11:29 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:11:29.937011 | orchestrator | 2025-08-29 15:11:29 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:11:29.937051 | orchestrator | 2025-08-29 15:11:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:32.970825 | orchestrator | 2025-08-29 15:11:32 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:11:32.971530 | orchestrator | 2025-08-29 15:11:32 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:11:32.973425 | orchestrator | 2025-08-29 15:11:32 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:11:32.974649 | orchestrator | 2025-08-29 15:11:32 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:11:32.974891 | orchestrator | 2025-08-29 15:11:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:36.011228 | orchestrator | 2025-08-29 15:11:36 | INFO  | Task 
60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:11:36.012117 | orchestrator | 2025-08-29 15:11:36 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:11:36.012565 | orchestrator | 2025-08-29 15:11:36 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:11:36.014130 | orchestrator | 2025-08-29 15:11:36 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:11:36.014206 | orchestrator | 2025-08-29 15:11:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:39.090789 | orchestrator | 2025-08-29 15:11:39 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:11:39.094325 | orchestrator | 2025-08-29 15:11:39 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:11:39.094812 | orchestrator | 2025-08-29 15:11:39 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:11:39.095847 | orchestrator | 2025-08-29 15:11:39 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:11:39.095912 | orchestrator | 2025-08-29 15:11:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:42.263313 | orchestrator | 2025-08-29 15:11:42 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state STARTED 2025-08-29 15:11:42.266401 | orchestrator | 2025-08-29 15:11:42 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:11:42.267446 | orchestrator | 2025-08-29 15:11:42 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:11:42.268487 | orchestrator | 2025-08-29 15:11:42 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED 2025-08-29 15:11:42.268541 | orchestrator | 2025-08-29 15:11:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:45.314087 | orchestrator | 2025-08-29 15:11:45 | INFO  | Task 
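The long run of `Task … is in state STARTED` / `Wait 1 second(s) until the next check` entries is a plain poll loop over outstanding task IDs. A minimal sketch of that loop shape, with a fake backend in place of the real task API (all names here are illustrative, not the OSISM client's actual interface):

```python
import time

def wait_for_tasks(task_ids, poll, interval=0.01):
    """Poll task IDs until every one reports SUCCESS; return the log lines."""
    log = []
    pending = set(task_ids)
    while pending:
        for task_id in list(pending):
            state = poll(task_id)
            log.append(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.remove(task_id)
        if pending:
            log.append(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
    return log

# Usage: each fake task flips from STARTED to SUCCESS on its second check.
calls = {}
def fake_poll(task_id):
    calls[task_id] = calls.get(task_id, 0) + 1
    return "SUCCESS" if calls[task_id] >= 2 else "STARTED"

log = wait_for_tasks(["60eaa2a7", "493efe52"], fake_poll)
```

The real interval is one second, which accounts for the roughly three-second cadence between checks in the timestamps above (polling plus API round-trips).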
6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED 2025-08-29 15:11:45.318071 | orchestrator | 2025-08-29 15:11:45 | INFO  | Task 60eaa2a7-8a08-493b-9ae1-94f6587cdcd6 is in state SUCCESS 2025-08-29 15:11:45.319963 | orchestrator | 2025-08-29 15:11:45.320020 | orchestrator | 2025-08-29 15:11:45.320026 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:11:45.320031 | orchestrator | 2025-08-29 15:11:45.320035 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:11:45.320040 | orchestrator | Friday 29 August 2025 15:07:52 +0000 (0:00:00.407) 0:00:00.407 ********* 2025-08-29 15:11:45.320044 | orchestrator | ok: [testbed-manager] 2025-08-29 15:11:45.320049 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:11:45.320053 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:11:45.320057 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:11:45.320061 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:11:45.320065 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:11:45.320069 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:11:45.320072 | orchestrator | 2025-08-29 15:11:45.320076 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:11:45.320080 | orchestrator | Friday 29 August 2025 15:07:53 +0000 (0:00:01.485) 0:00:01.892 ********* 2025-08-29 15:11:45.320085 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-08-29 15:11:45.320089 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-08-29 15:11:45.320093 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-08-29 15:11:45.320096 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-08-29 15:11:45.320100 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-08-29 15:11:45.320104 | orchestrator | ok: [testbed-node-4] => 
(item=enable_prometheus_True)
2025-08-29 15:11:45.320147 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-08-29 15:11:45.320152 | orchestrator |
2025-08-29 15:11:45.320156 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-08-29 15:11:45.320160 | orchestrator |
2025-08-29 15:11:45.320165 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-08-29 15:11:45.320170 | orchestrator | Friday 29 August 2025 15:07:54 +0000 (0:00:01.089) 0:00:02.981 *********
2025-08-29 15:11:45.320178 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:11:45.320186 | orchestrator |
2025-08-29 15:11:45.320193 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-08-29 15:11:45.320200 | orchestrator | Friday 29 August 2025 15:07:57 +0000 (0:00:02.385) 0:00:05.366 *********
2025-08-29 15:11:45.320208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:11:45.320235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:11:45.320316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:11:45.320324 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:11:45.320375 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-08-29 15:11:45.320383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.320387 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:11:45.320391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.320396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.320406 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320423 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:11:45.320428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.320441 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:11:45.320445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.320450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.320454 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320479 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320496 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.320534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.320540 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-08-29 15:11:45.320554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.320561 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.320567 | orchestrator |
2025-08-29 15:11:45.320573 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-08-29 15:11:45.320580 | orchestrator | Friday 29 August 2025 15:08:01 +0000 (0:00:04.456) 0:00:09.823 *********
2025-08-29 15:11:45.320586 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:11:45.320593 | orchestrator |
2025-08-29 15:11:45.320599 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-08-29 15:11:45.320606 | orchestrator | Friday 29 August 2025 15:08:03 +0000 (0:00:01.735) 0:00:11.558 *********
2025-08-29 15:11:45.320612 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-08-29 15:11:45.320628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:11:45.320633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:11:45.320771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:11:45.320791 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:11:45.320799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:11:45.320805 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:11:45.320812 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:11:45.320826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.320850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.320858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.320863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320871 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320876 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320880 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.320892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.320903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.320907 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320912 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320919 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320924 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-08-29 15:11:45.320932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.320947 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.320951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.321119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.321131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.321136 | orchestrator |
2025-08-29 15:11:45.321140 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2025-08-29 15:11:45.321151 | orchestrator | Friday 29 August 2025 15:08:09 +0000 (0:00:05.968) 0:00:17.526 *********
2025-08-29 15:11:45.321155 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-08-29 15:11:45.321160 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:11:45.321168 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.321173 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-08-29 15:11:45.321182 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.321186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:11:45.321194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.321198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.321202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:11:45.321208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:11:45.321212 |
orchestrator | skipping: [testbed-manager] 2025-08-29 15:11:45.321217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:11:45.321221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:11:45.321228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:11:45.321232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:11:45.321239 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:11:45.321301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:11:45.321314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:11:45.321318 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:11:45.321323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-08-29 15:11:45.321330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:11:45.321334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:11:45.321338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:11:45.321362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:11:45.321424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:11:45.321429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:11:45.321433 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:11:45.321437 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:11:45.321441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:11:45.321445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:11:45.321486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:11:45.321491 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:11:45.321495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:11:45.321510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:11:45.321525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:11:45.321529 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:11:45.321533 | orchestrator | 2025-08-29 15:11:45.321537 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-08-29 15:11:45.321542 | orchestrator | Friday 29 August 2025 15:08:11 +0000 (0:00:01.964) 0:00:19.491 ********* 2025-08-29 15:11:45.321546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:11:45.321550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:11:45.321554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:11:45.321561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:11:45.321565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-08-29 15:11:45.321569 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 15:11:45.321581 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:11:45.321629 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:11:45.321634 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 15:11:45.321639 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:11:45.321643 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:11:45.321647 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:11:45.321654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:11:45.321658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:11:45.321738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:11:45.321748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:11:45.321753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:11:45.321757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:11:45.321774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:11:45.321779 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:11:45.321783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:11:45.321791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:11:45.321800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:11:45.321804 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:11:45.321813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:11:45.321817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:11:45.321822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:11:45.321826 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:11:45.321830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:11:45.321835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:11:45.321841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:11:45.321846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:11:45.321854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:11:45.321859 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:11:45.322341 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:11:45.322440 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:11:45.322512 | orchestrator | 2025-08-29 15:11:45.322521 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-08-29 15:11:45.322527 | orchestrator | Friday 29 August 2025 15:08:13 +0000 (0:00:02.502) 0:00:21.994 ********* 2025-08-29 15:11:45.322545 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 15:11:45.322555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:11:45.322562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:11:45.322584 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:11:45.322601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:11:45.322607 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:11:45.322621 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:11:45.322627 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:11:45.322634 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.322641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.322648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.322694 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.322707 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.322711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.322720 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.322724 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.322729 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.322733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.322740 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}}}}) 2025-08-29 15:11:45.322750 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.322754 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.322762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.322766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.322770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.322774 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.322778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.322788 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.322792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.322796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.322801 | orchestrator | 2025-08-29 15:11:45.322805 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-08-29 15:11:45.322810 | orchestrator | Friday 29 August 2025 15:08:21 +0000 (0:00:07.341) 0:00:29.335 ********* 2025-08-29 15:11:45.322824 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 15:11:45.322828 | orchestrator | 2025-08-29 15:11:45.322832 | 
orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-08-29 15:11:45.322839 | orchestrator | Friday 29 August 2025 15:08:22 +0000 (0:00:01.248) 0:00:30.584 ********* 2025-08-29 15:11:45.322844 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096826, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3367217, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322850 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096826, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3367217, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322854 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096826, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3367217, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322862 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096826, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3367217, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322870 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096843, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3410623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322875 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096826, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3367217, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322882 | orchestrator | changed: 
[testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096826, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3367217, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:11:45.322887 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096826, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3367217, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322892 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096843, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3410623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322896 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096843, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3410623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322906 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096843, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3410623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322913 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096823, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.335062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322917 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096823, 'dev': 111, 'nlink': 1, 'atime': 
1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.335062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322924 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096843, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3410623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322928 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096843, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3410623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322933 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096823, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.335062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322940 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096823, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.335062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322945 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096843, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3410623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:11:45.322952 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096837, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3397226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322956 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096837, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3397226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322963 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096823, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.335062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322967 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096823, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.335062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.322972 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096837, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3397226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.322979 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096837, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3397226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.322983 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096820, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.334062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.322990 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096837, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3397226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.322994 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096820, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.334062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.322999 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096829, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3380623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323013 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096823, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.335062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323018 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096820, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.334062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323025 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096820, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.334062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323029 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096835, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3390622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323036 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096837, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3397226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323040 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096829, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3380623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323044 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096820, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.334062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323051 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096829, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3380623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323059 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096820, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.334062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323063 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096829, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3380623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323067 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096830, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3380623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323073 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096829, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3380623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323077 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096835, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3390622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323081 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096835, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3390622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323088 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096837, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3397226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323095 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096835, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3390622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323099 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096835, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3390622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323103 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096825, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3360622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323109 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096829, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3380623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323114 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096830, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3380623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323117 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096830, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3380623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323124 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096830, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3380623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323134 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096841, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3409185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323138 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096830, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3380623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323142 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096825, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3360622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323159 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096825, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3360622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323164 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096835, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3390622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323168 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096820, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.334062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323176 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096830, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3380623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323184 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096825, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3360622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323188 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096841, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3409185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323192 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096841, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3409185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323199 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096815, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.332062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323203 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096825, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3360622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323207 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096841, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3409185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323217 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096825, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3360622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323221 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096841, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3409185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323225 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096815, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.332062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323229 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096853, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3430622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323235 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096829, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3380623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323239 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096815, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.332062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323243 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096853, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3430622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323290 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096841, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3409185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323295 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096815, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.332062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323299 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096815, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.332062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323303 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096840, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.340428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323311 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096815, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.332062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323317 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096853, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3430622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323325 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096840, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.340428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323342 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096853, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3430622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323348 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096835, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3390622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323353 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096853, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3430622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323359 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096822, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3345397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323365 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096822, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3345397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323374 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096853, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3430622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323423 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096840, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.340428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323707 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096816, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3334463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323730 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096840, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.340428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323738 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096822, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3345397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323745 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096840, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.340428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323752 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096840, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.340428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323766 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096833, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3390076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.323782 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096830, 'dev': 111, 'nlink': 1, 'atime': 
1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3380623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:11:45.323796 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096816, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3334463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.323804 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096822, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3345397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.323811 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096822, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3345397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.323818 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096816, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3334463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.323825 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096816, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3334463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.323869 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096822, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3345397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.323886 | orchestrator | 
skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096832, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3388484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.323898 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096833, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3390076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.323905 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096833, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3390076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.323913 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096816, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3334463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.323920 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096816, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3334463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.323927 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096833, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3390076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.323939 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096851, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 
'mtime': 1756453149.0, 'ctime': 1756477169.3424845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.323951 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:11:45.323959 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096832, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3388484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.323969 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096832, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3388484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.323977 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096833, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3390076, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.323985 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096832, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3388484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.323992 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096833, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3390076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.324000 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096851, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3424845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 
 2025-08-29 15:11:45.324011 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:11:45.324026 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096851, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3424845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.324056 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096832, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3388484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.324063 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:11:45.324074 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096825, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3360622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 
15:11:45.324082 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096832, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3388484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.324089 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096851, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3424845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.324096 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:11:45.324103 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096851, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3424845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.324111 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:11:45.324122 | 
orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096851, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3424845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:11:45.324134 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:11:45.324141 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096841, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3409185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:11:45.324148 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096815, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.332062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:11:45.324160 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096853, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3430622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:11:45.324168 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096840, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.340428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:11:45.324175 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096822, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3345397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:11:45.324182 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096816, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3334463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:11:45.324198 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096833, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3390076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:11:45.324206 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096832, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3388484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:11:45.324213 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096851, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 
1756477169.3424845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:11:45.324220 | orchestrator |
2025-08-29 15:11:45.324228 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-08-29 15:11:45.324236 | orchestrator | Friday 29 August 2025 15:08:58 +0000 (0:00:36.540) 0:01:07.124 *********
2025-08-29 15:11:45.324243 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 15:11:45.324250 | orchestrator |
2025-08-29 15:11:45.324323 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-08-29 15:11:45.324330 | orchestrator | Friday 29 August 2025 15:08:59 +0000 (0:00:00.862) 0:01:07.986 *********
2025-08-29 15:11:45.324336 | orchestrator | [WARNING]: Skipped
2025-08-29 15:11:45.324342 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:11:45.324349 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-08-29 15:11:45.324355 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:11:45.324361 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-08-29 15:11:45.324367 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 15:11:45.324414 | orchestrator | [WARNING]: Skipped
2025-08-29 15:11:45.324420 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:11:45.324425 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-08-29 15:11:45.324431 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:11:45.324437 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-08-29 15:11:45.324443 | orchestrator | [WARNING]: Skipped
2025-08-29 15:11:45.324448 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:11:45.324454 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-08-29 15:11:45.324461 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:11:45.324466 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-08-29 15:11:45.324478 | orchestrator | [WARNING]: Skipped
2025-08-29 15:11:45.324484 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:11:45.324490 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-08-29 15:11:45.324496 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:11:45.324502 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-08-29 15:11:45.324508 | orchestrator | [WARNING]: Skipped
2025-08-29 15:11:45.324513 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:11:45.324519 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-08-29 15:11:45.324525 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:11:45.324531 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-08-29 15:11:45.324537 | orchestrator | [WARNING]: Skipped
2025-08-29 15:11:45.324543 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:11:45.324549 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-08-29 15:11:45.324555 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:11:45.324561 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-08-29 15:11:45.324566 | orchestrator | [WARNING]: Skipped
2025-08-29 15:11:45.324572 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:11:45.324579 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-08-29 15:11:45.324585 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:11:45.324590 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-08-29 15:11:45.324596 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 15:11:45.324601 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-08-29 15:11:45.324607 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-08-29 15:11:45.324612 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 15:11:45.324618 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-08-29 15:11:45.324629 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-08-29 15:11:45.324635 | orchestrator |
2025-08-29 15:11:45.324641 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-08-29 15:11:45.324646 | orchestrator | Friday 29 August 2025 15:09:01 +0000 (0:00:01.988) 0:01:09.975 *********
2025-08-29 15:11:45.324652 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:11:45.324660 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:11:45.324666 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:11:45.324671 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:11:45.324677 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:11:45.324683 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:11:45.324690 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:11:45.324696 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:11:45.324702 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:11:45.324707 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:11:45.324713 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:11:45.324719 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:11:45.324729 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:11:45.324736 | orchestrator |
2025-08-29 15:11:45.324742 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-08-29 15:11:45.324748 | orchestrator | Friday 29 August 2025 15:09:19 +0000 (0:00:17.863) 0:01:27.838 *********
2025-08-29 15:11:45.324761 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:11:45.324774 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:11:45.324781 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:11:45.324787 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:11:45.324793 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:11:45.324798 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:11:45.324803 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:11:45.324809 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:11:45.324815 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:11:45.324820 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:11:45.324826 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:11:45.324832 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:11:45.324837 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:11:45.324843 | orchestrator |
2025-08-29 15:11:45.324849 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-08-29 15:11:45.324855 | orchestrator | Friday 29 August 2025 15:09:23 +0000 (0:00:03.939) 0:01:31.778 *********
2025-08-29 15:11:45.324862 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 15:11:45.324869 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 15:11:45.324875 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 15:11:45.324881 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:11:45.324887 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:11:45.324893 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:11:45.324899 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 15:11:45.324906 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:11:45.324912 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 15:11:45.324918 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:11:45.324925 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 15:11:45.324931 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 15:11:45.324938 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:11:45.324945 | orchestrator |
2025-08-29 15:11:45.324951 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-08-29 15:11:45.324955 | orchestrator | Friday 29 August 2025 15:09:26 +0000 (0:00:02.560) 0:01:34.339 *********
2025-08-29 15:11:45.324959 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 15:11:45.324963 | orchestrator |
2025-08-29 15:11:45.324967 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-08-29 15:11:45.324970 | orchestrator | Friday 29 August 2025 15:09:26 +0000 (0:00:00.752) 0:01:35.092 *********
2025-08-29 15:11:45.324974 | orchestrator | skipping: [testbed-manager]
2025-08-29 15:11:45.324982 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:11:45.324986 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:11:45.324990 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:11:45.324994 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:11:45.325005 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:11:45.325009 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:11:45.325014 | orchestrator |
2025-08-29 15:11:45.325018 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-08-29 15:11:45.325022 | orchestrator | Friday 29 August 2025 15:09:27 +0000 (0:00:00.700) 0:01:35.792 *********
2025-08-29 15:11:45.325026 | orchestrator | skipping: [testbed-manager]
2025-08-29 15:11:45.325031 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:11:45.325035 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:11:45.325039 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:11:45.325043 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:11:45.325047 |
orchestrator | changed: [testbed-node-1] 2025-08-29 15:11:45.325051 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:11:45.325055 | orchestrator | 2025-08-29 15:11:45.325060 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-08-29 15:11:45.325064 | orchestrator | Friday 29 August 2025 15:09:30 +0000 (0:00:02.484) 0:01:38.277 ********* 2025-08-29 15:11:45.325068 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:11:45.325073 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:11:45.325077 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:11:45.325081 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:11:45.325086 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:11:45.325090 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:11:45.325094 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:11:45.325098 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:11:45.325107 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:11:45.325112 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:11:45.325116 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:11:45.325120 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:11:45.325124 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:11:45.325128 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:11:45.325133 | orchestrator | 2025-08-29 15:11:45.325137 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-08-29 
15:11:45.325141 | orchestrator | Friday 29 August 2025 15:09:32 +0000 (0:00:02.100) 0:01:40.378 ********* 2025-08-29 15:11:45.325145 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:11:45.325149 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:11:45.325154 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:11:45.325158 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:11:45.325162 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:11:45.325166 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:11:45.325171 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:11:45.325175 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:11:45.325179 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:11:45.325183 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:11:45.325187 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:11:45.325191 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:11:45.325195 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-08-29 15:11:45.325204 | orchestrator | 2025-08-29 15:11:45.325208 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-08-29 15:11:45.325213 | orchestrator | Friday 29 August 2025 15:09:34 +0000 (0:00:02.023) 0:01:42.401 ********* 2025-08-29 15:11:45.325217 | orchestrator | [WARNING]: Skipped 2025-08-29 15:11:45.325221 | orchestrator 
| '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-08-29 15:11:45.325225 | orchestrator | due to this access issue: 2025-08-29 15:11:45.325230 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-08-29 15:11:45.325235 | orchestrator | not a directory 2025-08-29 15:11:45.325239 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 15:11:45.325243 | orchestrator | 2025-08-29 15:11:45.325247 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-08-29 15:11:45.325276 | orchestrator | Friday 29 August 2025 15:09:35 +0000 (0:00:01.583) 0:01:43.985 ********* 2025-08-29 15:11:45.325284 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:11:45.325290 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:11:45.325294 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:11:45.325298 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:11:45.325302 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:11:45.325306 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:11:45.325310 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:11:45.325314 | orchestrator | 2025-08-29 15:11:45.325319 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-08-29 15:11:45.325323 | orchestrator | Friday 29 August 2025 15:09:37 +0000 (0:00:01.193) 0:01:45.178 ********* 2025-08-29 15:11:45.325331 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:11:45.325335 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:11:45.325340 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:11:45.325344 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:11:45.325348 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:11:45.325352 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:11:45.325357 | orchestrator | skipping: [testbed-node-5] 2025-08-29 
15:11:45.325360 | orchestrator | 2025-08-29 15:11:45.325364 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-08-29 15:11:45.325368 | orchestrator | Friday 29 August 2025 15:09:37 +0000 (0:00:00.743) 0:01:45.921 ********* 2025-08-29 15:11:45.325373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:11:45.325382 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:11:45.325387 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 15:11:45.325396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:11:45.325400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:11:45.325404 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:11:45.325411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.325416 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:11:45.325420 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.325428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 
15:11:45.325435 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:11:45.325439 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.325443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.325447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.325456 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.325460 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.325464 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.325472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.325479 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.325483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.325487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.325491 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.325497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.325502 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 15:11:45.325509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.325517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:11:45.325521 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.325525 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.325529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:11:45.325533 | orchestrator | 2025-08-29 15:11:45.325537 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-08-29 15:11:45.325540 | orchestrator | Friday 29 August 2025 15:09:42 +0000 (0:00:05.115) 0:01:51.037 ********* 2025-08-29 15:11:45.325544 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-08-29 15:11:45.325548 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:11:45.325552 | orchestrator | 2025-08-29 15:11:45.325556 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 15:11:45.325560 | orchestrator | Friday 29 August 2025 15:09:44 +0000 (0:00:01.345) 0:01:52.383 ********* 2025-08-29 15:11:45.325564 | orchestrator | 2025-08-29 15:11:45.325568 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 15:11:45.325596 | orchestrator | Friday 
29 August 2025 15:09:44 +0000 (0:00:00.073) 0:01:52.456 ********* 2025-08-29 15:11:45.325600 | orchestrator | 2025-08-29 15:11:45.325604 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 15:11:45.325608 | orchestrator | Friday 29 August 2025 15:09:44 +0000 (0:00:00.068) 0:01:52.525 ********* 2025-08-29 15:11:45.325612 | orchestrator | 2025-08-29 15:11:45.325615 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 15:11:45.325619 | orchestrator | Friday 29 August 2025 15:09:44 +0000 (0:00:00.066) 0:01:52.591 ********* 2025-08-29 15:11:45.325623 | orchestrator | 2025-08-29 15:11:45.325630 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 15:11:45.325634 | orchestrator | Friday 29 August 2025 15:09:44 +0000 (0:00:00.297) 0:01:52.889 ********* 2025-08-29 15:11:45.325638 | orchestrator | 2025-08-29 15:11:45.325642 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 15:11:45.325645 | orchestrator | Friday 29 August 2025 15:09:44 +0000 (0:00:00.074) 0:01:52.963 ********* 2025-08-29 15:11:45.325649 | orchestrator | 2025-08-29 15:11:45.325653 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 15:11:45.325656 | orchestrator | Friday 29 August 2025 15:09:44 +0000 (0:00:00.067) 0:01:53.031 ********* 2025-08-29 15:11:45.325660 | orchestrator | 2025-08-29 15:11:45.325664 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-08-29 15:11:45.325668 | orchestrator | Friday 29 August 2025 15:09:44 +0000 (0:00:00.092) 0:01:53.123 ********* 2025-08-29 15:11:45.325671 | orchestrator | changed: [testbed-manager] 2025-08-29 15:11:45.325675 | orchestrator | 2025-08-29 15:11:45.325679 | orchestrator | RUNNING HANDLER [prometheus : Restart 
prometheus-node-exporter container] ******
2025-08-29 15:11:45.325685 | orchestrator | Friday 29 August 2025 15:10:05 +0000 (0:00:20.435) 0:02:13.559 *********
2025-08-29 15:11:45.325689 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:11:45.325693 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:11:45.325696 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:11:45.325700 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:11:45.325704 | orchestrator | changed: [testbed-manager]
2025-08-29 15:11:45.325708 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:11:45.325712 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:11:45.325715 | orchestrator |
2025-08-29 15:11:45.325719 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-08-29 15:11:45.325723 | orchestrator | Friday 29 August 2025 15:10:21 +0000 (0:00:15.728) 0:02:29.288 *********
2025-08-29 15:11:45.325727 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:11:45.325730 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:11:45.325734 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:11:45.325738 | orchestrator |
2025-08-29 15:11:45.325741 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-08-29 15:11:45.325745 | orchestrator | Friday 29 August 2025 15:10:34 +0000 (0:00:13.046) 0:02:42.334 *********
2025-08-29 15:11:45.325749 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:11:45.325753 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:11:45.325756 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:11:45.325760 | orchestrator |
2025-08-29 15:11:45.325764 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-08-29 15:11:45.325768 | orchestrator | Friday 29 August 2025 15:10:40 +0000 (0:00:06.382) 0:02:48.716 *********
2025-08-29 15:11:45.325771 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:11:45.325775 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:11:45.325779 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:11:45.325782 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:11:45.325786 | orchestrator | changed: [testbed-manager]
2025-08-29 15:11:45.325790 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:11:45.325794 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:11:45.325797 | orchestrator |
2025-08-29 15:11:45.325801 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-08-29 15:11:45.325805 | orchestrator | Friday 29 August 2025 15:10:59 +0000 (0:00:18.836) 0:03:07.553 *********
2025-08-29 15:11:45.325809 | orchestrator | changed: [testbed-manager]
2025-08-29 15:11:45.325812 | orchestrator |
2025-08-29 15:11:45.325816 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-08-29 15:11:45.325820 | orchestrator | Friday 29 August 2025 15:11:09 +0000 (0:00:10.221) 0:03:17.774 *********
2025-08-29 15:11:45.325824 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:11:45.325828 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:11:45.325831 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:11:45.325838 | orchestrator |
2025-08-29 15:11:45.325841 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-08-29 15:11:45.325845 | orchestrator | Friday 29 August 2025 15:11:22 +0000 (0:00:12.701) 0:03:30.476 *********
2025-08-29 15:11:45.325849 | orchestrator | changed: [testbed-manager]
2025-08-29 15:11:45.325853 | orchestrator |
2025-08-29 15:11:45.325857 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-08-29 15:11:45.325860 | orchestrator | Friday 29 August 2025 15:11:28 +0000 (0:00:06.323) 0:03:36.799 *********
2025-08-29 15:11:45.325864 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:11:45.325868 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:11:45.325871 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:11:45.325875 | orchestrator |
2025-08-29 15:11:45.325879 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:11:45.325883 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 15:11:45.325888 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 15:11:45.325895 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 15:11:45.325899 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 15:11:45.325903 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-08-29 15:11:45.325907 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-08-29 15:11:45.325910 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-08-29 15:11:45.325914 | orchestrator |
2025-08-29 15:11:45.325918 | orchestrator |
2025-08-29 15:11:45.325922 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:11:45.325954 | orchestrator | Friday 29 August 2025 15:11:42 +0000 (0:00:14.152) 0:03:50.952 *********
2025-08-29 15:11:45.325958 | orchestrator | ===============================================================================
2025-08-29 15:11:45.325962 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 36.54s
2025-08-29 15:11:45.325966 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 20.44s
2025-08-29 15:11:45.325969 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 18.84s
2025-08-29 15:11:45.325973 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.86s
2025-08-29 15:11:45.325977 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 15.73s
2025-08-29 15:11:45.325983 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 14.15s
2025-08-29 15:11:45.325987 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 13.05s
2025-08-29 15:11:45.325991 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 12.70s
2025-08-29 15:11:45.325994 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 10.22s
2025-08-29 15:11:45.325998 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.34s
2025-08-29 15:11:45.326001 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.38s
2025-08-29 15:11:45.326005 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 6.32s
2025-08-29 15:11:45.326009 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.97s
2025-08-29 15:11:45.326047 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.12s
2025-08-29 15:11:45.326053 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.46s
2025-08-29 15:11:45.326056 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.94s
2025-08-29 15:11:45.326060 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.56s
2025-08-29 15:11:45.326064 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.50s
2025-08-29 15:11:45.326068 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.48s
2025-08-29 15:11:45.326072 | orchestrator | prometheus : include_tasks ---------------------------------------------- 2.39s
2025-08-29 15:11:45.326141 | orchestrator | 2025-08-29 15:11:45 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED
2025-08-29 15:11:45.328884 | orchestrator | 2025-08-29 15:11:45 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED
2025-08-29 15:11:45.338010 | orchestrator | 2025-08-29 15:11:45 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED
2025-08-29 15:11:45.338108 | orchestrator | 2025-08-29 15:11:45 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:11:48.376329 | orchestrator | 2025-08-29 15:11:48 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED
2025-08-29 15:11:48.377130 | orchestrator | 2025-08-29 15:11:48 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED
2025-08-29 15:11:48.378424 | orchestrator | 2025-08-29 15:11:48 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED
2025-08-29 15:11:48.379465 | orchestrator | 2025-08-29 15:11:48 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED
2025-08-29 15:11:48.379521 | orchestrator | 2025-08-29 15:11:48 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:11:51.426460 | orchestrator | 2025-08-29 15:11:51 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED
2025-08-29 15:11:51.426690 | orchestrator | 2025-08-29 15:11:51 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED
2025-08-29 15:11:51.427293 | orchestrator | 2025-08-29 15:11:51 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED
2025-08-29 15:11:51.428191 | orchestrator | 2025-08-29 15:11:51 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state STARTED
2025-08-29 15:11:51.428348 | orchestrator | 2025-08-29 15:11:51 | INFO  |
Wait 1 second(s) until the next check
2025-08-29 15:13:22.878104 | orchestrator | 2025-08-29 15:13:22 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED
2025-08-29 15:13:22.879201 | orchestrator | 2025-08-29 15:13:22 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED
2025-08-29 15:13:22.880633 | orchestrator | 2025-08-29 15:13:22 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED
2025-08-29 15:13:22.882271 | orchestrator | 2025-08-29 15:13:22 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED
2025-08-29 15:13:22.883896 | orchestrator | 2025-08-29 15:13:22 | INFO  | Task 493efe52-f259-4fab-a500-0ddf2d357c55 is in state SUCCESS
2025-08-29 15:13:22.885409 | orchestrator |
2025-08-29 15:13:22.885440 | orchestrator
| 2025-08-29 15:13:22.885449 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:13:22.885477 | orchestrator |
2025-08-29 15:13:22.885485 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:13:22.885493 | orchestrator | Friday 29 August 2025 15:09:55 +0000 (0:00:00.284) 0:00:00.284 *********
2025-08-29 15:13:22.885501 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:13:22.885515 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:13:22.885528 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:13:22.885540 | orchestrator |
2025-08-29 15:13:22.885554 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:13:22.885566 | orchestrator | Friday 29 August 2025 15:09:56 +0000 (0:00:00.333) 0:00:00.617 *********
2025-08-29 15:13:22.885579 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-08-29 15:13:22.885591 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-08-29 15:13:22.885604 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-08-29 15:13:22.885616 | orchestrator |
2025-08-29 15:13:22.885629 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-08-29 15:13:22.885643 | orchestrator |
2025-08-29 15:13:22.885657 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-08-29 15:13:22.885670 | orchestrator | Friday 29 August 2025 15:09:56 +0000 (0:00:00.444) 0:00:01.062 *********
2025-08-29 15:13:22.885678 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:13:22.885686 | orchestrator |
2025-08-29 15:13:22.885694 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-08-29 15:13:22.885702 | orchestrator | Friday 29 August 2025 15:09:57 +0000 (0:00:00.559) 0:00:01.621 *********
2025-08-29 15:13:22.885710 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-08-29 15:13:22.885717 | orchestrator |
2025-08-29 15:13:22.885725 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-08-29 15:13:22.885733 | orchestrator | Friday 29 August 2025 15:10:00 +0000 (0:00:03.471) 0:00:05.093 *********
2025-08-29 15:13:22.885741 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-08-29 15:13:22.885748 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-08-29 15:13:22.885756 | orchestrator |
2025-08-29 15:13:22.885764 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-08-29 15:13:22.885772 | orchestrator | Friday 29 August 2025 15:10:07 +0000 (0:00:06.891) 0:00:11.984 *********
2025-08-29 15:13:22.885779 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-08-29 15:13:22.885788 | orchestrator |
2025-08-29 15:13:22.885795 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-08-29 15:13:22.885803 | orchestrator | Friday 29 August 2025 15:10:11 +0000 (0:00:03.813) 0:00:15.798 *********
2025-08-29 15:13:22.885811 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 15:13:22.885819 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-08-29 15:13:22.885827 | orchestrator |
2025-08-29 15:13:22.885834 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-08-29 15:13:22.885842 | orchestrator | Friday 29 August 2025 15:10:15 +0000 (0:00:04.224) 0:00:20.022 *********
2025-08-29 15:13:22.885850 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 15:13:22.885858 | orchestrator |
2025-08-29
15:13:22.885869 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-08-29 15:13:22.885882 | orchestrator | Friday 29 August 2025 15:10:19 +0000 (0:00:03.553) 0:00:23.576 ********* 2025-08-29 15:13:22.885895 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-08-29 15:13:22.885907 | orchestrator | 2025-08-29 15:13:22.885920 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-08-29 15:13:22.885932 | orchestrator | Friday 29 August 2025 15:10:24 +0000 (0:00:04.961) 0:00:28.538 ********* 2025-08-29 15:13:22.885980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:13:22.886063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:13:22.886087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:13:22.886105 | orchestrator | 2025-08-29 15:13:22.886114 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2025-08-29 15:13:22.886123 | orchestrator | Friday 29 August 2025 15:10:30 +0000 (0:00:06.042) 0:00:34.580 ********* 2025-08-29 15:13:22.886132 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:13:22.886141 | orchestrator | 2025-08-29 15:13:22.886157 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-08-29 15:13:22.886167 | orchestrator | Friday 29 August 2025 15:10:31 +0000 (0:00:01.065) 0:00:35.645 ********* 2025-08-29 15:13:22.886201 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:22.886211 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:13:22.886220 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:13:22.886229 | orchestrator | 2025-08-29 15:13:22.886237 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-08-29 15:13:22.886246 | orchestrator | Friday 29 August 2025 15:10:35 +0000 (0:00:04.557) 0:00:40.203 ********* 2025-08-29 15:13:22.886255 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 15:13:22.886263 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 15:13:22.886272 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 15:13:22.886281 | orchestrator | 2025-08-29 15:13:22.886289 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-08-29 15:13:22.886298 | orchestrator | Friday 29 August 2025 15:10:37 +0000 (0:00:01.964) 0:00:42.167 ********* 2025-08-29 15:13:22.886306 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 
15:13:22.886315 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 15:13:22.886324 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 15:13:22.886332 | orchestrator | 2025-08-29 15:13:22.886341 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-08-29 15:13:22.886349 | orchestrator | Friday 29 August 2025 15:10:38 +0000 (0:00:01.225) 0:00:43.392 ********* 2025-08-29 15:13:22.886358 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:13:22.886367 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:13:22.886375 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:13:22.886384 | orchestrator | 2025-08-29 15:13:22.886392 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-08-29 15:13:22.886401 | orchestrator | Friday 29 August 2025 15:10:39 +0000 (0:00:00.702) 0:00:44.094 ********* 2025-08-29 15:13:22.886409 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:22.886417 | orchestrator | 2025-08-29 15:13:22.886424 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-08-29 15:13:22.886438 | orchestrator | Friday 29 August 2025 15:10:40 +0000 (0:00:00.368) 0:00:44.463 ********* 2025-08-29 15:13:22.886446 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:22.886454 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:22.886461 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:22.886469 | orchestrator | 2025-08-29 15:13:22.886477 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 15:13:22.886485 | orchestrator | Friday 29 August 2025 15:10:40 +0000 (0:00:00.375) 0:00:44.838 ********* 2025-08-29 15:13:22.886492 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:13:22.886500 | orchestrator | 2025-08-29 15:13:22.886508 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-08-29 15:13:22.886516 | orchestrator | Friday 29 August 2025 15:10:41 +0000 (0:00:01.079) 0:00:45.918 ********* 2025-08-29 15:13:22.886534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:13:22.886544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:13:22.886562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:13:22.886571 | orchestrator | 2025-08-29 15:13:22.886579 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-08-29 15:13:22.886588 | orchestrator | Friday 29 August 2025 15:10:51 +0000 (0:00:10.136) 0:00:56.054 ********* 2025-08-29 15:13:22.886602 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:13:22.886616 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:22.886625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:13:22.886634 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:22.886652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:13:22.886661 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:22.886669 | orchestrator | 2025-08-29 15:13:22.886677 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-08-29 15:13:22.886685 | orchestrator | Friday 29 August 2025 15:10:55 +0000 (0:00:04.062) 0:01:00.117 ********* 2025-08-29 15:13:22.886693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:13:22.886707 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:22.886735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:13:22.886752 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:22.886761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:13:22.886775 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:22.886783 | orchestrator | 2025-08-29 15:13:22.886791 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-08-29 15:13:22.886799 | orchestrator | Friday 29 August 2025 15:10:59 +0000 (0:00:03.432) 0:01:03.549 ********* 2025-08-29 15:13:22.886807 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:22.886815 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:22.886823 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:22.886830 | orchestrator | 2025-08-29 15:13:22.886838 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-08-29 15:13:22.886846 | orchestrator | Friday 29 August 2025 15:11:05 +0000 (0:00:05.852) 0:01:09.402 ********* 2025-08-29 15:13:22.886867 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:13:22.886877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:13:22.886894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:13:22.886903 | orchestrator | 2025-08-29 15:13:22.886911 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-08-29 15:13:22.886919 | orchestrator | Friday 29 August 2025 15:11:13 +0000 (0:00:08.942) 0:01:18.344 ********* 2025-08-29 15:13:22.886926 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:13:22.886934 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:22.886942 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:13:22.886950 | orchestrator | 2025-08-29 15:13:22.886958 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-08-29 15:13:22.886966 | orchestrator | Friday 29 August 2025 15:11:20 +0000 (0:00:06.885) 0:01:25.230 ********* 2025-08-29 
15:13:22.886976 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:22.886990 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:22.887004 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:22.887016 | orchestrator | 2025-08-29 15:13:22.887030 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-08-29 15:13:22.887049 | orchestrator | Friday 29 August 2025 15:11:26 +0000 (0:00:05.661) 0:01:30.892 ********* 2025-08-29 15:13:22.887073 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:22.887087 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:22.887100 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:22.887112 | orchestrator | 2025-08-29 15:13:22.887126 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-08-29 15:13:22.887139 | orchestrator | Friday 29 August 2025 15:11:34 +0000 (0:00:08.045) 0:01:38.937 ********* 2025-08-29 15:13:22.887153 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:22.887167 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:22.887225 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:22.887234 | orchestrator | 2025-08-29 15:13:22.887242 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-08-29 15:13:22.887250 | orchestrator | Friday 29 August 2025 15:11:40 +0000 (0:00:06.062) 0:01:45.000 ********* 2025-08-29 15:13:22.887258 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:22.887265 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:22.887274 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:22.887288 | orchestrator | 2025-08-29 15:13:22.887301 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-08-29 15:13:22.887313 | orchestrator | Friday 29 August 2025 15:11:47 +0000 (0:00:06.632) 0:01:51.632 ********* 2025-08-29 
15:13:22.887325 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:22.887337 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:22.887349 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:22.887360 | orchestrator | 2025-08-29 15:13:22.887370 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-08-29 15:13:22.887381 | orchestrator | Friday 29 August 2025 15:11:47 +0000 (0:00:00.674) 0:01:52.306 ********* 2025-08-29 15:13:22.887392 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 15:13:22.887403 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:22.887415 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 15:13:22.887426 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:22.887438 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 15:13:22.887449 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:22.887463 | orchestrator | 2025-08-29 15:13:22.887476 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-08-29 15:13:22.887490 | orchestrator | Friday 29 August 2025 15:11:58 +0000 (0:00:10.340) 0:02:02.647 ********* 2025-08-29 15:13:22.887512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:13:22.887550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:13:22.887566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:13:22.887580 | orchestrator | 2025-08-29 15:13:22.887594 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 15:13:22.887606 | orchestrator | Friday 29 August 2025 15:12:04 +0000 (0:00:06.220) 0:02:08.868 ********* 2025-08-29 15:13:22.887627 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:22.887640 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:22.887658 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:22.887670 | orchestrator | 2025-08-29 15:13:22.887684 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-08-29 15:13:22.887698 | orchestrator | Friday 29 August 2025 15:12:04 +0000 (0:00:00.352) 0:02:09.221 ********* 2025-08-29 15:13:22.887712 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:22.887726 | orchestrator | 2025-08-29 15:13:22.887740 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-08-29 15:13:22.887752 | orchestrator | Friday 29 August 2025 15:12:07 +0000 (0:00:02.185) 0:02:11.406 ********* 2025-08-29 15:13:22.887766 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:22.887778 | orchestrator | 2025-08-29 15:13:22.887792 | orchestrator | TASK [glance : Enable 
log_bin_trust_function_creators function] ****************
2025-08-29 15:13:22.887805 | orchestrator | Friday 29 August 2025 15:12:09 +0000 (0:00:02.175) 0:02:13.581 *********
2025-08-29 15:13:22.887819 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:13:22.887832 | orchestrator |
2025-08-29 15:13:22.887845 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-08-29 15:13:22.887858 | orchestrator | Friday 29 August 2025 15:12:11 +0000 (0:00:02.175) 0:02:15.757 *********
2025-08-29 15:13:22.887872 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:13:22.887885 | orchestrator |
2025-08-29 15:13:22.887898 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-08-29 15:13:22.887913 | orchestrator | Friday 29 August 2025 15:12:41 +0000 (0:00:29.655) 0:02:45.412 *********
2025-08-29 15:13:22.887926 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:13:22.887940 | orchestrator |
2025-08-29 15:13:22.887960 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-08-29 15:13:22.887974 | orchestrator | Friday 29 August 2025 15:12:43 +0000 (0:00:02.209) 0:02:47.622 *********
2025-08-29 15:13:22.887987 | orchestrator |
2025-08-29 15:13:22.888001 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-08-29 15:13:22.888013 | orchestrator | Friday 29 August 2025 15:12:43 +0000 (0:00:00.060) 0:02:47.683 *********
2025-08-29 15:13:22.888026 | orchestrator |
2025-08-29 15:13:22.888039 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-08-29 15:13:22.888052 | orchestrator | Friday 29 August 2025 15:12:43 +0000 (0:00:00.070) 0:02:47.753 *********
2025-08-29 15:13:22.888066 | orchestrator |
2025-08-29 15:13:22.888079 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-08-29
15:13:22.888093 | orchestrator | Friday 29 August 2025 15:12:43 +0000 (0:00:00.070) 0:02:47.823 *********
2025-08-29 15:13:22.888108 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:13:22.888122 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:13:22.888135 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:13:22.888148 | orchestrator |
2025-08-29 15:13:22.888161 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:13:22.888221 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-08-29 15:13:22.888237 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 15:13:22.888249 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 15:13:22.888260 | orchestrator |
2025-08-29 15:13:22.888272 | orchestrator |
2025-08-29 15:13:22.888284 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:13:22.888296 | orchestrator | Friday 29 August 2025 15:13:19 +0000 (0:00:36.558) 0:03:24.382 *********
2025-08-29 15:13:22.888308 | orchestrator | ===============================================================================
2025-08-29 15:13:22.888320 | orchestrator | glance : Restart glance-api container ---------------------------------- 36.56s
2025-08-29 15:13:22.888346 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.66s
2025-08-29 15:13:22.888359 | orchestrator | glance : Copying over glance-haproxy-tls.cfg --------------------------- 10.34s
2025-08-29 15:13:22.888371 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates -------- 10.14s
2025-08-29 15:13:22.888383 | orchestrator | glance : Copying over config.json files for services -------------------- 8.94s
2025-08-29 15:13:22.888394 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 8.04s
2025-08-29 15:13:22.888406 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.89s
2025-08-29 15:13:22.888419 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.89s
2025-08-29 15:13:22.888431 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 6.63s
2025-08-29 15:13:22.888443 | orchestrator | glance : Check glance containers ---------------------------------------- 6.22s
2025-08-29 15:13:22.888457 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 6.06s
2025-08-29 15:13:22.888470 | orchestrator | glance : Ensuring config directories exist ------------------------------ 6.04s
2025-08-29 15:13:22.888484 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.85s
2025-08-29 15:13:22.888497 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.66s
2025-08-29 15:13:22.888511 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.96s
2025-08-29 15:13:22.888524 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.56s
2025-08-29 15:13:22.888538 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.22s
2025-08-29 15:13:22.888552 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.06s
2025-08-29 15:13:22.888575 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.81s
2025-08-29 15:13:22.888590 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.55s
2025-08-29 15:13:22.888603 | orchestrator | 2025-08-29 15:13:22 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:13:25.923223 | orchestrator |
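The per-task timing lines in the TASKS RECAP above (e.g. "glance : Restart glance-api container ---------------------------------- 36.56s") have a regular shape that can be parsed when triaging slow deploys. A minimal sketch; the regex and function name are our own, not part of Zuul or OSISM tooling:

```python
import re

# Matches Ansible profile-style recap lines, e.g.
#   glance : Restart glance-api container ------------------- 36.56s
RECAP_RE = re.compile(r"^(?P<task>.+?)\s-+\s(?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Return {task_name: seconds} for every recap timing line found."""
    timings = {}
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            timings[m.group("task")] = float(m.group("secs"))
    return timings
```

Sorting the resulting dict by value quickly surfaces the dominant tasks (here the glance-api restart and bootstrap container account for most of the play's runtime).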
2025-08-29 15:13:25 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED 2025-08-29 15:13:25.923312 | orchestrator | 2025-08-29 15:13:25 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:13:25.924858 | orchestrator | 2025-08-29 15:13:25 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED 2025-08-29 15:13:25.925652 | orchestrator | 2025-08-29 15:13:25 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:13:25.925689 | orchestrator | 2025-08-29 15:13:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:28.959633 | orchestrator | 2025-08-29 15:13:28 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED 2025-08-29 15:13:28.960340 | orchestrator | 2025-08-29 15:13:28 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:13:28.961993 | orchestrator | 2025-08-29 15:13:28 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED 2025-08-29 15:13:28.962947 | orchestrator | 2025-08-29 15:13:28 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:13:28.962981 | orchestrator | 2025-08-29 15:13:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:32.001264 | orchestrator | 2025-08-29 15:13:32 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED 2025-08-29 15:13:32.003535 | orchestrator | 2025-08-29 15:13:32 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:13:32.005134 | orchestrator | 2025-08-29 15:13:32 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED 2025-08-29 15:13:32.007377 | orchestrator | 2025-08-29 15:13:32 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:13:32.007447 | orchestrator | 2025-08-29 15:13:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:35.040217 | orchestrator | 2025-08-29 15:13:35 | INFO  | 
Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED 2025-08-29 15:13:35.041804 | orchestrator | 2025-08-29 15:13:35 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:13:35.043628 | orchestrator | 2025-08-29 15:13:35 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED 2025-08-29 15:13:35.045427 | orchestrator | 2025-08-29 15:13:35 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:13:35.045470 | orchestrator | 2025-08-29 15:13:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:38.086468 | orchestrator | 2025-08-29 15:13:38 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED 2025-08-29 15:13:38.087728 | orchestrator | 2025-08-29 15:13:38 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:13:38.089507 | orchestrator | 2025-08-29 15:13:38 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED 2025-08-29 15:13:38.091301 | orchestrator | 2025-08-29 15:13:38 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:13:38.091333 | orchestrator | 2025-08-29 15:13:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:41.126707 | orchestrator | 2025-08-29 15:13:41 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED 2025-08-29 15:13:41.128839 | orchestrator | 2025-08-29 15:13:41 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:13:41.131416 | orchestrator | 2025-08-29 15:13:41 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED 2025-08-29 15:13:41.133533 | orchestrator | 2025-08-29 15:13:41 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:13:41.133589 | orchestrator | 2025-08-29 15:13:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:44.171824 | orchestrator | 2025-08-29 15:13:44 | INFO  | Task 
6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED 2025-08-29 15:13:44.173435 | orchestrator | 2025-08-29 15:13:44 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:13:44.174585 | orchestrator | 2025-08-29 15:13:44 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED 2025-08-29 15:13:44.176319 | orchestrator | 2025-08-29 15:13:44 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:13:44.176366 | orchestrator | 2025-08-29 15:13:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:47.212519 | orchestrator | 2025-08-29 15:13:47 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED 2025-08-29 15:13:47.213746 | orchestrator | 2025-08-29 15:13:47 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:13:47.214779 | orchestrator | 2025-08-29 15:13:47 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED 2025-08-29 15:13:47.215812 | orchestrator | 2025-08-29 15:13:47 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:13:47.215855 | orchestrator | 2025-08-29 15:13:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:50.246171 | orchestrator | 2025-08-29 15:13:50 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED 2025-08-29 15:13:50.248412 | orchestrator | 2025-08-29 15:13:50 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:13:50.249975 | orchestrator | 2025-08-29 15:13:50 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED 2025-08-29 15:13:50.251451 | orchestrator | 2025-08-29 15:13:50 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED 2025-08-29 15:13:50.251488 | orchestrator | 2025-08-29 15:13:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:53.294450 | orchestrator | 2025-08-29 15:13:53 | INFO  | Task 
6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED
2025-08-29 15:13:53.296657 | orchestrator | 2025-08-29 15:13:53 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED
2025-08-29 15:13:53.298781 | orchestrator | 2025-08-29 15:13:53 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED
2025-08-29 15:13:53.300329 | orchestrator | 2025-08-29 15:13:53 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state STARTED
2025-08-29 15:13:53.300373 | orchestrator | 2025-08-29 15:13:53 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:13:56.339911 | orchestrator | 2025-08-29 15:13:56 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED
2025-08-29 15:13:56.342127 | orchestrator | 2025-08-29 15:13:56 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED
2025-08-29 15:13:56.343690 | orchestrator | 2025-08-29 15:13:56 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED
2025-08-29 15:13:56.345880 | orchestrator |
2025-08-29 15:13:56.345943 | orchestrator | 2025-08-29 15:13:56 | INFO  | Task 4b5a6cdc-c426-41de-a796-f3bba97e5aff is in state SUCCESS
2025-08-29 15:13:56.346060 | orchestrator | 2025-08-29 15:13:56 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:13:56.347591 | orchestrator |
2025-08-29 15:13:56.347658 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:13:56.347669 | orchestrator |
2025-08-29 15:13:56.347677 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:13:56.347684 | orchestrator | Friday 29 August 2025 15:10:24 +0000 (0:00:00.723) 0:00:00.723 *********
2025-08-29 15:13:56.347691 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:13:56.347700 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:13:56.347707 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:13:56.347713 | orchestrator | ok: [testbed-node-3]
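The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" records above come from a fixed-interval poll loop that runs until every task ID reaches a terminal state. A minimal sketch of that pattern; the function, `get_state` callback, and state names are illustrative, not the actual OSISM client code:

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(get_state, task_ids, interval=1.0, log=print):
    """Poll every `interval` seconds until all tasks hit a terminal state.

    get_state(task_id) -> current state string (e.g. "STARTED", "SUCCESS").
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, so discard() is safe
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

In the log this is why one task (4b5a6cdc…) flips to SUCCESS and drops out of the status lines while the other three keep reporting STARTED on each iteration.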
2025-08-29 15:13:56.347719 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:13:56.347726 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:13:56.347732 | orchestrator | 2025-08-29 15:13:56.347738 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:13:56.347745 | orchestrator | Friday 29 August 2025 15:10:25 +0000 (0:00:01.517) 0:00:02.240 ********* 2025-08-29 15:13:56.347751 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-08-29 15:13:56.347758 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-08-29 15:13:56.347765 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-08-29 15:13:56.347771 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-08-29 15:13:56.347778 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-08-29 15:13:56.347784 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-08-29 15:13:56.347790 | orchestrator | 2025-08-29 15:13:56.347797 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-08-29 15:13:56.347804 | orchestrator | 2025-08-29 15:13:56.347810 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 15:13:56.347817 | orchestrator | Friday 29 August 2025 15:10:27 +0000 (0:00:01.513) 0:00:03.753 ********* 2025-08-29 15:13:56.347824 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:13:56.347851 | orchestrator | 2025-08-29 15:13:56.347858 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-08-29 15:13:56.347865 | orchestrator | Friday 29 August 2025 15:10:29 +0000 (0:00:01.948) 0:00:05.702 ********* 2025-08-29 15:13:56.347872 | orchestrator | changed: [testbed-node-0] => 
(item=cinderv3 (volumev3)) 2025-08-29 15:13:56.347878 | orchestrator | 2025-08-29 15:13:56.347898 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-08-29 15:13:56.347905 | orchestrator | Friday 29 August 2025 15:10:32 +0000 (0:00:03.553) 0:00:09.256 ********* 2025-08-29 15:13:56.347913 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-08-29 15:13:56.347919 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-08-29 15:13:56.347926 | orchestrator | 2025-08-29 15:13:56.347932 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-08-29 15:13:56.347939 | orchestrator | Friday 29 August 2025 15:10:39 +0000 (0:00:06.308) 0:00:15.564 ********* 2025-08-29 15:13:56.347945 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:13:56.347951 | orchestrator | 2025-08-29 15:13:56.347958 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-08-29 15:13:56.347965 | orchestrator | Friday 29 August 2025 15:10:42 +0000 (0:00:03.629) 0:00:19.194 ********* 2025-08-29 15:13:56.347972 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:13:56.347979 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-08-29 15:13:56.347985 | orchestrator | 2025-08-29 15:13:56.347991 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-08-29 15:13:56.347998 | orchestrator | Friday 29 August 2025 15:10:47 +0000 (0:00:04.520) 0:00:23.714 ********* 2025-08-29 15:13:56.348004 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:13:56.348021 | orchestrator | 2025-08-29 15:13:56.348028 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] 
********************** 2025-08-29 15:13:56.348035 | orchestrator | Friday 29 August 2025 15:10:51 +0000 (0:00:03.812) 0:00:27.527 ********* 2025-08-29 15:13:56.348042 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-08-29 15:13:56.348048 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-08-29 15:13:56.348055 | orchestrator | 2025-08-29 15:13:56.348061 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-08-29 15:13:56.348068 | orchestrator | Friday 29 August 2025 15:10:59 +0000 (0:00:08.247) 0:00:35.774 ********* 2025-08-29 15:13:56.348077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:56.348099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:56.348119 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.348126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 
15:13:56.348134 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.348158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.348175 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.348189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:13:56.348199 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.348206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.348213 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.348220 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.348226 | orchestrator |
2025-08-29 15:13:56.348236 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 15:13:56.348248 | orchestrator | Friday 29 August 2025 15:11:03 +0000 (0:00:03.862) 0:00:39.636 *********
2025-08-29 15:13:56.348255 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:13:56.348272 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:13:56.348278 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:13:56.348284 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:13:56.348290 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:13:56.348297 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:13:56.348303 | orchestrator |
2025-08-29 15:13:56.348309 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 15:13:56.348316 | orchestrator | Friday 29 August 2025 15:11:04 +0000 (0:00:01.007) 0:00:40.644 *********
2025-08-29 15:13:56.348323 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:13:56.348330 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:13:56.348336 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:13:56.348342 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:13:56.348349 | orchestrator |
2025-08-29 15:13:56.348355 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-08-29 15:13:56.348362 | orchestrator | Friday 29 August 2025 15:11:05 +0000 (0:00:01.556) 0:00:42.201 *********
2025-08-29 15:13:56.348368 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-08-29 15:13:56.348374 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-08-29 15:13:56.348381 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-08-29 15:13:56.348387 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-08-29 15:13:56.348394 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-08-29 15:13:56.348400 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-08-29 15:13:56.348407 | orchestrator |
2025-08-29 15:13:56.348413 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-08-29 15:13:56.348419 | orchestrator | Friday 29 August 2025 15:11:09 +0000 (0:00:03.302) 0:00:45.504 *********
2025-08-29 15:13:56.348431 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:13:56.348439 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:13:56.348447 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:13:56.348475 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:13:56.348483 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:13:56.348493 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:13:56.348501 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:13:56.348508 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:13:56.348526 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:13:56.348533 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:13:56.348544 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:13:56.348551 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:13:56.348563 | orchestrator |
2025-08-29 15:13:56.348569 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-08-29 15:13:56.348576 | orchestrator | Friday 29 August 2025 15:11:15 +0000 (0:00:05.965) 0:00:51.470 *********
2025-08-29 15:13:56.348583 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:13:56.348591 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:13:56.348597 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:13:56.348603 | orchestrator |
2025-08-29 15:13:56.348610 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-08-29 15:13:56.348616 | orchestrator | Friday 29 August 2025 15:11:17 +0000 (0:00:02.536) 0:00:54.006 *********
2025-08-29 15:13:56.348622 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-08-29 15:13:56.348628 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-08-29 15:13:56.348635 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-08-29 15:13:56.348641 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 15:13:56.348647 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 15:13:56.348663 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 15:13:56.348669 | orchestrator |
2025-08-29 15:13:56.348676 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-08-29 15:13:56.348683 | orchestrator | Friday 29 August 2025 15:11:20 +0000 (0:00:03.203) 0:00:57.210 *********
2025-08-29 15:13:56.348689 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-08-29 15:13:56.348696 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-08-29 15:13:56.348702 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-08-29 15:13:56.348709 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-08-29 15:13:56.348715 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-08-29 15:13:56.348722 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-08-29 15:13:56.348728 | orchestrator |
2025-08-29 15:13:56.348735 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-08-29 15:13:56.348742 | orchestrator | Friday 29 August 2025 15:11:21 +0000 (0:00:01.178) 0:00:58.389 *********
2025-08-29 15:13:56.348748 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:13:56.348755 | orchestrator |
2025-08-29 15:13:56.348761 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-08-29 15:13:56.348767 | orchestrator | Friday 29 August 2025 15:11:22 +0000 (0:00:00.136) 0:00:58.526 *********
2025-08-29 15:13:56.348774 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:13:56.348780 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:13:56.348787 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:13:56.348792 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:13:56.348798 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:13:56.348805 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:13:56.348812 | orchestrator |
2025-08-29 15:13:56.348819 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 15:13:56.348826 | orchestrator | Friday 29 August 2025 15:11:23 +0000 (0:00:01.215) 0:00:59.741 *********
2025-08-29 15:13:56.348833 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:13:56.348841 | orchestrator |
2025-08-29 15:13:56.348848 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-08-29 15:13:56.348854 | orchestrator | Friday 29 August 2025 15:11:25 +0000 (0:00:02.117) 0:01:01.859 *********
2025-08-29 15:13:56.348865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:13:56.348883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:13:56.348896 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.348903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:13:56.348914 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.348926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.348933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.348940 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.348951 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.348959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.348969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.348980 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.348987 | orchestrator |
2025-08-29 15:13:56.349011 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2025-08-29 15:13:56.349018 | orchestrator | Friday 29 August 2025 15:11:29 +0000 (0:00:04.208) 0:01:06.068 *********
2025-08-29 15:13:56.349025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:13:56.349037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.349043 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:13:56.349051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:13:56.349058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.349069 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:13:56.349080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:13:56.349087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.349093 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:13:56.349108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.349120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.349127 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:13:56.349133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.349200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:13:56.349209 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:13:56.349215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.349222 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.349228 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:56.349235 | orchestrator | 2025-08-29 15:13:56.349241 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-08-29 15:13:56.349248 | orchestrator | Friday 29 August 2025 15:11:33 +0000 (0:00:04.280) 0:01:10.348 ********* 2025-08-29 15:13:56.349260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:13:56.349272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.349280 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:56.349290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:13:56.349297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.349304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.349318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.349325 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:56.349344 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:56.349351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:13:56.349367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.349374 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:56.349380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.349387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.349394 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:56.349405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.349417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.349423 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:56.349430 | orchestrator | 2025-08-29 15:13:56.349436 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-08-29 15:13:56.349443 | orchestrator | Friday 29 August 2025 15:11:36 +0000 (0:00:02.561) 0:01:12.909 ********* 2025-08-29 15:13:56.349453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:56.349460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:56.349467 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.349480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.349508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:56.349520 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.349527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.349534 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.349546 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.349557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.349564 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.349585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.349593 | orchestrator | 2025-08-29 15:13:56.349600 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-08-29 15:13:56.349606 | orchestrator | Friday 29 August 2025 15:11:40 +0000 (0:00:04.506) 0:01:17.415 ********* 2025-08-29 15:13:56.349613 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 15:13:56.349620 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:56.349626 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 15:13:56.349633 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 15:13:56.349639 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:56.349646 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 15:13:56.349652 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 15:13:56.349659 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:56.349665 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 15:13:56.349671 | orchestrator | 2025-08-29 15:13:56.349678 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-08-29 15:13:56.349684 | orchestrator | Friday 29 August 2025 15:11:43 +0000 (0:00:02.975) 0:01:20.391 ********* 2025-08-29 15:13:56.349691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:56.349708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:56.349716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:56.349728 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.349736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.349753 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.349760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.349767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.349777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.349783 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.349790 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.349802 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.349809 | orchestrator | 2025-08-29 15:13:56.349817 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-08-29 15:13:56.349823 | orchestrator | Friday 29 August 2025 15:11:59 +0000 (0:00:15.720) 0:01:36.111 ********* 2025-08-29 15:13:56.349834 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:56.349841 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:56.349848 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:56.349854 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:13:56.349876 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:13:56.349882 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:13:56.349888 | orchestrator | 2025-08-29 15:13:56.349895 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-08-29 15:13:56.349902 | orchestrator | Friday 29 August 2025 15:12:02 +0000 (0:00:03.141) 0:01:39.253 ********* 2025-08-29 15:13:56.349909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:13:56.349919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.349925 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:56.349932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:13:56.349959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.349966 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:56.349977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:13:56.349984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.349990 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:56.350001 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.350008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.350075 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:56.350085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.350092 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.350100 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:56.350396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.350469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:13:56.350481 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:56.350489 | orchestrator | 2025-08-29 15:13:56.350497 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-08-29 15:13:56.350504 | orchestrator | Friday 29 August 2025 15:12:04 +0000 (0:00:01.775) 0:01:41.028 ********* 2025-08-29 15:13:56.350521 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:56.350528 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:56.350535 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:56.350541 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:56.350547 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:56.350553 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:56.350560 | orchestrator | 2025-08-29 15:13:56.350584 | orchestrator | TASK 
[cinder : Check cinder containers] **************************************** 2025-08-29 15:13:56.350588 | orchestrator | Friday 29 August 2025 15:12:05 +0000 (0:00:00.699) 0:01:41.728 ********* 2025-08-29 15:13:56.350592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:56.350598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:56.350612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:56.350619 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.350630 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.350643 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.350650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.350665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.350672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.350679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.350696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.350703 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:56.350709 | orchestrator | 2025-08-29 15:13:56.350715 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 15:13:56.350724 | orchestrator | Friday 29 August 2025 15:12:08 +0000 (0:00:02.729) 0:01:44.458 ********* 
2025-08-29 15:13:56.350731 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:13:56.350739 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:13:56.350744 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:13:56.350750 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:13:56.350755 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:13:56.350761 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:13:56.350768 | orchestrator |
2025-08-29 15:13:56.350777 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2025-08-29 15:13:56.350784 | orchestrator | Friday 29 August 2025 15:12:08 +0000 (0:00:00.630) 0:01:45.088 *********
2025-08-29 15:13:56.350789 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:13:56.350796 | orchestrator |
2025-08-29 15:13:56.350802 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-08-29 15:13:56.350808 | orchestrator | Friday 29 August 2025 15:12:11 +0000 (0:00:02.656) 0:01:47.745 *********
2025-08-29 15:13:56.350814 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:13:56.350821 | orchestrator |
2025-08-29 15:13:56.350827 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-08-29 15:13:56.350834 | orchestrator | Friday 29 August 2025 15:12:13 +0000 (0:00:02.152) 0:01:49.898 *********
2025-08-29 15:13:56.350840 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:13:56.350847 | orchestrator |
2025-08-29 15:13:56.350851 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 15:13:56.350855 | orchestrator | Friday 29 August 2025 15:12:34 +0000 (0:00:21.206) 0:02:11.105 *********
2025-08-29 15:13:56.350859 | orchestrator |
2025-08-29 15:13:56.350866 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 15:13:56.350871 | orchestrator | Friday 29 August 2025 15:12:34 +0000 (0:00:00.078) 0:02:11.184 *********
2025-08-29 15:13:56.350875 | orchestrator |
2025-08-29 15:13:56.350879 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 15:13:56.350894 | orchestrator | Friday 29 August 2025 15:12:34 +0000 (0:00:00.076) 0:02:11.260 *********
2025-08-29 15:13:56.350910 | orchestrator |
2025-08-29 15:13:56.350917 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 15:13:56.350924 | orchestrator | Friday 29 August 2025 15:12:34 +0000 (0:00:00.075) 0:02:11.336 *********
2025-08-29 15:13:56.350930 | orchestrator |
2025-08-29 15:13:56.350944 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 15:13:56.350951 | orchestrator | Friday 29 August 2025 15:12:34 +0000 (0:00:00.071) 0:02:11.407 *********
2025-08-29 15:13:56.350957 | orchestrator |
2025-08-29 15:13:56.350965 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 15:13:56.350968 | orchestrator | Friday 29 August 2025 15:12:35 +0000 (0:00:00.092) 0:02:11.500 *********
2025-08-29 15:13:56.350973 | orchestrator |
2025-08-29 15:13:56.350977 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-08-29 15:13:56.350981 | orchestrator | Friday 29 August 2025 15:12:35 +0000 (0:00:00.086) 0:02:11.586 *********
2025-08-29 15:13:56.350984 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:13:56.350988 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:13:56.350992 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:13:56.350996 | orchestrator |
2025-08-29 15:13:56.351000 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-08-29 15:13:56.351004 | orchestrator | Friday 29 August 2025 15:13:01 +0000 (0:00:26.640) 0:02:38.227 *********
2025-08-29 15:13:56.351008 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:13:56.351012 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:13:56.351015 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:13:56.351019 | orchestrator |
2025-08-29 15:13:56.351023 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-08-29 15:13:56.351027 | orchestrator | Friday 29 August 2025 15:13:09 +0000 (0:00:08.175) 0:02:46.402 *********
2025-08-29 15:13:56.351031 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:13:56.351035 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:13:56.351039 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:13:56.351043 | orchestrator |
2025-08-29 15:13:56.351051 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-08-29 15:13:56.351055 | orchestrator | Friday 29 August 2025 15:13:47 +0000 (0:00:37.661) 0:03:24.064 *********
2025-08-29 15:13:56.351060 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:13:56.351066 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:13:56.351072 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:13:56.351078 | orchestrator |
2025-08-29 15:13:56.351084 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-08-29 15:13:56.351090 | orchestrator | Friday 29 August 2025 15:13:54 +0000 (0:00:06.538) 0:03:30.602 *********
2025-08-29 15:13:56.351095 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:13:56.351101 | orchestrator |
2025-08-29 15:13:56.351106 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:13:56.351113 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 15:13:56.351119 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-08-29 15:13:56.351123 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-08-29 15:13:56.351127 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 15:13:56.351130 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 15:13:56.351134 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 15:13:56.351138 | orchestrator |
2025-08-29 15:13:56.351165 | orchestrator |
2025-08-29 15:13:56.351169 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:13:56.351173 | orchestrator | Friday 29 August 2025 15:13:54 +0000 (0:00:00.804) 0:03:31.407 *********
2025-08-29 15:13:56.351182 | orchestrator | ===============================================================================
2025-08-29 15:13:56.351187 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 37.66s
2025-08-29 15:13:56.351190 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 26.64s
2025-08-29 15:13:56.351194 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.21s
2025-08-29 15:13:56.351198 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 15.72s
2025-08-29 15:13:56.351202 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.25s
2025-08-29 15:13:56.351206 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 8.18s
2025-08-29 15:13:56.351209 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.54s
2025-08-29 15:13:56.351213 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.31s
2025-08-29 15:13:56.351223 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.97s
2025-08-29 15:13:56.351227 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.52s
2025-08-29 15:13:56.351230 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.51s
2025-08-29 15:13:56.351234 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS certificate --- 4.28s
2025-08-29 15:13:56.351238 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.21s
2025-08-29 15:13:56.351243 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.86s
2025-08-29 15:13:56.351249 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.81s
2025-08-29 15:13:56.351255 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.63s
2025-08-29 15:13:56.351261 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.55s
2025-08-29 15:13:56.351268 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 3.30s
2025-08-29 15:13:56.351274 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.20s
2025-08-29 15:13:56.351280 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.14s
2025-08-29 15:13:59.398743 | orchestrator | 2025-08-29 15:13:59 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED
2025-08-29 15:13:59.399791 | orchestrator | 2025-08-29 15:13:59 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED
2025-08-29 15:13:59.402316 | orchestrator | 2025-08-29 15:13:59 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED
2025-08-29 15:13:59.403980 | orchestrator | 2025-08-29 15:13:59 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED
2025-08-29 15:13:59.404151 | orchestrator | 2025-08-29 15:13:59 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:14:02.436647 | orchestrator | 2025-08-29 15:14:02 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED
2025-08-29 15:14:02.437477 | orchestrator | 2025-08-29 15:14:02 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED
2025-08-29 15:14:02.438821 | orchestrator | 2025-08-29 15:14:02 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED
2025-08-29 15:14:02.439816 | orchestrator | 2025-08-29 15:14:02 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED
2025-08-29 15:14:02.439848 | orchestrator | 2025-08-29 15:14:02 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:14:05.469010 | orchestrator | 2025-08-29 15:14:05 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED
2025-08-29 15:14:05.470120 | orchestrator | 2025-08-29 15:14:05 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED
2025-08-29 15:14:05.471750 | orchestrator | 2025-08-29 15:14:05 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED
2025-08-29 15:14:05.472934 | orchestrator | 2025-08-29 15:14:05 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED
2025-08-29 15:14:05.473511 | orchestrator | 2025-08-29 15:14:05 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:14:08.498218 | orchestrator | 2025-08-29 15:14:08 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED
2025-08-29 15:14:08.500775 | orchestrator | 2025-08-29 15:14:08 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED
2025-08-29 15:14:08.501851 | orchestrator | 2025-08-29 15:14:08 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED
2025-08-29 15:14:08.503288 | orchestrator | 2025-08-29 15:14:08 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED
2025-08-29 15:14:08.503327 | orchestrator | 2025-08-29 15:14:08 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:14:11.541770 | orchestrator | 2025-08-29 15:14:11 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED
2025-08-29 15:14:11.541855 | orchestrator | 2025-08-29 15:14:11 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED
2025-08-29 15:14:11.542906 | orchestrator | 2025-08-29 15:14:11 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED
2025-08-29 15:14:11.544119 | orchestrator | 2025-08-29 15:14:11 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED
2025-08-29 15:14:11.544211 | orchestrator | 2025-08-29 15:14:11 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:14:14.583860 | orchestrator | 2025-08-29 15:14:14 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED
2025-08-29 15:14:14.583950 | orchestrator | 2025-08-29 15:14:14 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED
2025-08-29 15:14:14.587213 | orchestrator | 2025-08-29 15:14:14 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED
2025-08-29 15:14:14.590840 | orchestrator | 2025-08-29 15:14:14 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED
2025-08-29 15:14:14.590973 | orchestrator | 2025-08-29 15:14:14 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:14:17.625709 | orchestrator | 2025-08-29 15:14:17 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED
2025-08-29 15:14:17.627681 | orchestrator | 2025-08-29 15:14:17 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED
2025-08-29 15:14:17.627972 | orchestrator | 2025-08-29 15:14:17 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED
2025-08-29 15:14:17.629146 | orchestrator | 2025-08-29 15:14:17 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED
2025-08-29 15:14:17.629179 | orchestrator | 2025-08-29 15:14:17 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:14:20.656092 | orchestrator | 2025-08-29 15:14:20 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED
2025-08-29 15:14:20.656865 | orchestrator | 2025-08-29 15:14:20 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED
2025-08-29 15:14:20.658149 | orchestrator | 2025-08-29 15:14:20 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED
2025-08-29 15:14:20.659076 | orchestrator | 2025-08-29 15:14:20 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state STARTED
2025-08-29 15:14:20.659101 | orchestrator | 2025-08-29 15:14:20 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:14:23.697076 | orchestrator | 2025-08-29 15:14:23 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED
2025-08-29 15:14:23.698426 | orchestrator | 2025-08-29 15:14:23 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED
2025-08-29 15:14:23.700958 | orchestrator | 2025-08-29 15:14:23 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED
2025-08-29 15:14:23.702416 | orchestrator | 2025-08-29 15:14:23 | INFO  | Task 4f72a172-f278-4abf-9670-36307dfd0624 is in state SUCCESS
2025-08-29 15:14:23.702461 | orchestrator | 2025-08-29 15:14:23 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:14:26.739715 | orchestrator | 2025-08-29 15:14:26 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED
2025-08-29 15:14:26.741474 | orchestrator | 2025-08-29 15:14:26 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED
2025-08-29 15:14:26.743270 | orchestrator | 2025-08-29 15:14:26 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED
2025-08-29 15:14:26.743315 | orchestrator | 2025-08-29 15:14:26 | INFO  | Wait 1 second(s) until the next
check 2025-08-29 15:14:29.783235 | orchestrator | 2025-08-29 15:14:29 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:14:29.784195 | orchestrator | 2025-08-29 15:14:29 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED 2025-08-29 15:14:29.785247 | orchestrator | 2025-08-29 15:14:29 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:14:29.785277 | orchestrator | 2025-08-29 15:14:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:32.820763 | orchestrator | 2025-08-29 15:14:32 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:14:32.822461 | orchestrator | 2025-08-29 15:14:32 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED 2025-08-29 15:14:32.824182 | orchestrator | 2025-08-29 15:14:32 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:14:32.824228 | orchestrator | 2025-08-29 15:14:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:35.855791 | orchestrator | 2025-08-29 15:14:35 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:14:35.857790 | orchestrator | 2025-08-29 15:14:35 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED 2025-08-29 15:14:35.860740 | orchestrator | 2025-08-29 15:14:35 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:14:35.860813 | orchestrator | 2025-08-29 15:14:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:38.894417 | orchestrator | 2025-08-29 15:14:38 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:14:38.895743 | orchestrator | 2025-08-29 15:14:38 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED 2025-08-29 15:14:38.897690 | orchestrator | 2025-08-29 15:14:38 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 
15:14:38.897738 | orchestrator | 2025-08-29 15:14:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:41.935692 | orchestrator | 2025-08-29 15:14:41 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:14:41.935783 | orchestrator | 2025-08-29 15:14:41 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED 2025-08-29 15:14:41.936740 | orchestrator | 2025-08-29 15:14:41 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:14:41.936869 | orchestrator | 2025-08-29 15:14:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:44.975203 | orchestrator | 2025-08-29 15:14:44 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:14:44.976432 | orchestrator | 2025-08-29 15:14:44 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED 2025-08-29 15:14:44.977650 | orchestrator | 2025-08-29 15:14:44 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:14:44.977700 | orchestrator | 2025-08-29 15:14:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:48.010568 | orchestrator | 2025-08-29 15:14:48 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:14:48.010998 | orchestrator | 2025-08-29 15:14:48 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED 2025-08-29 15:14:48.012153 | orchestrator | 2025-08-29 15:14:48 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:14:48.012183 | orchestrator | 2025-08-29 15:14:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:51.050995 | orchestrator | 2025-08-29 15:14:51 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:14:51.052074 | orchestrator | 2025-08-29 15:14:51 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state STARTED 2025-08-29 15:14:51.054163 | orchestrator | 2025-08-29 15:14:51 | 
INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:14:51.054240 | orchestrator | 2025-08-29 15:14:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:54.108360 | orchestrator | 2025-08-29 15:14:54 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:14:54.108452 | orchestrator | 2025-08-29 15:14:54 | INFO  | Task 6be870a2-8b2a-4ea9-94c1-a14da4977688 is in state SUCCESS 2025-08-29 15:14:54.109368 | orchestrator | 2025-08-29 15:14:54 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:14:54.109403 | orchestrator | 2025-08-29 15:14:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:57.143839 | orchestrator | 2025-08-29 15:14:57 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:14:57.145655 | orchestrator | 2025-08-29 15:14:57 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:14:57.145708 | orchestrator | 2025-08-29 15:14:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:00.200328 | orchestrator | 2025-08-29 15:15:00 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:00.204003 | orchestrator | 2025-08-29 15:15:00 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:00.204073 | orchestrator | 2025-08-29 15:15:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:03.251895 | orchestrator | 2025-08-29 15:15:03 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:03.253240 | orchestrator | 2025-08-29 15:15:03 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:03.253293 | orchestrator | 2025-08-29 15:15:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:06.292422 | orchestrator | 2025-08-29 15:15:06 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 
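The interleaved status lines above come from a client that repeatedly polls each submitted task, logs its state, and sleeps before the next check until every task leaves STARTED. A minimal sketch of that poll-until-done pattern (the `wait_for_tasks` helper and `get_task_state` lookup are hypothetical stand-ins; the real osism client's internals may differ):

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll each task until none is still STARTED (hypothetical helper).

    get_task_state(task_id) -> "STARTED" | "SUCCESS" | "FAILURE"
    """
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                results[task_id] = state
        # Drop finished tasks; keep polling the rest.
        pending -= set(results)
        if pending:
            print(f"INFO  | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

As in the log, tasks finish independently: each one drops out of the polling set on its own SUCCESS, and the loop ends once the last task completes.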
2025-08-29 15:15:06.292534 | orchestrator | 2025-08-29 15:15:06 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:06.292551 | orchestrator | 2025-08-29 15:15:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:09.336263 | orchestrator | 2025-08-29 15:15:09 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:09.336697 | orchestrator | 2025-08-29 15:15:09 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:09.337741 | orchestrator | 2025-08-29 15:15:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:12.376675 | orchestrator | 2025-08-29 15:15:12 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:12.377684 | orchestrator | 2025-08-29 15:15:12 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:12.377722 | orchestrator | 2025-08-29 15:15:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:15.415855 | orchestrator | 2025-08-29 15:15:15 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:15.417294 | orchestrator | 2025-08-29 15:15:15 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:15.417342 | orchestrator | 2025-08-29 15:15:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:18.458973 | orchestrator | 2025-08-29 15:15:18 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:18.459807 | orchestrator | 2025-08-29 15:15:18 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:18.459838 | orchestrator | 2025-08-29 15:15:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:21.493322 | orchestrator | 2025-08-29 15:15:21 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:21.494555 | orchestrator | 2025-08-29 15:15:21 | INFO  | Task 
5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:21.494642 | orchestrator | 2025-08-29 15:15:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:24.542365 | orchestrator | 2025-08-29 15:15:24 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:24.544095 | orchestrator | 2025-08-29 15:15:24 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:24.544165 | orchestrator | 2025-08-29 15:15:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:27.581482 | orchestrator | 2025-08-29 15:15:27 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:27.583133 | orchestrator | 2025-08-29 15:15:27 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:27.583849 | orchestrator | 2025-08-29 15:15:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:30.621779 | orchestrator | 2025-08-29 15:15:30 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:30.622911 | orchestrator | 2025-08-29 15:15:30 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:30.622956 | orchestrator | 2025-08-29 15:15:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:33.663983 | orchestrator | 2025-08-29 15:15:33 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:33.667102 | orchestrator | 2025-08-29 15:15:33 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:33.667170 | orchestrator | 2025-08-29 15:15:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:36.700660 | orchestrator | 2025-08-29 15:15:36 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:36.702784 | orchestrator | 2025-08-29 15:15:36 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 
15:15:36.702838 | orchestrator | 2025-08-29 15:15:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:39.758662 | orchestrator | 2025-08-29 15:15:39 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:39.759621 | orchestrator | 2025-08-29 15:15:39 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:39.759670 | orchestrator | 2025-08-29 15:15:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:42.795344 | orchestrator | 2025-08-29 15:15:42 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:42.795869 | orchestrator | 2025-08-29 15:15:42 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:42.795908 | orchestrator | 2025-08-29 15:15:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:45.851915 | orchestrator | 2025-08-29 15:15:45 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:45.853556 | orchestrator | 2025-08-29 15:15:45 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:45.853602 | orchestrator | 2025-08-29 15:15:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:48.900270 | orchestrator | 2025-08-29 15:15:48 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:48.901691 | orchestrator | 2025-08-29 15:15:48 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:48.901722 | orchestrator | 2025-08-29 15:15:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:51.932477 | orchestrator | 2025-08-29 15:15:51 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:51.932817 | orchestrator | 2025-08-29 15:15:51 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:51.932849 | orchestrator | 2025-08-29 15:15:51 | INFO  | Wait 1 second(s) 
until the next check 2025-08-29 15:15:54.974684 | orchestrator | 2025-08-29 15:15:54 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:54.974786 | orchestrator | 2025-08-29 15:15:54 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:54.974803 | orchestrator | 2025-08-29 15:15:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:58.020979 | orchestrator | 2025-08-29 15:15:58 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:15:58.021577 | orchestrator | 2025-08-29 15:15:58 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:15:58.021625 | orchestrator | 2025-08-29 15:15:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:01.063986 | orchestrator | 2025-08-29 15:16:01 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:16:01.064968 | orchestrator | 2025-08-29 15:16:01 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:16:01.065047 | orchestrator | 2025-08-29 15:16:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:04.111472 | orchestrator | 2025-08-29 15:16:04 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:16:04.111555 | orchestrator | 2025-08-29 15:16:04 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:16:04.111565 | orchestrator | 2025-08-29 15:16:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:07.147164 | orchestrator | 2025-08-29 15:16:07 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:16:07.149807 | orchestrator | 2025-08-29 15:16:07 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:16:07.150130 | orchestrator | 2025-08-29 15:16:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:10.192106 | orchestrator | 2025-08-29 
15:16:10 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:16:10.194238 | orchestrator | 2025-08-29 15:16:10 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:16:10.194292 | orchestrator | 2025-08-29 15:16:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:13.242171 | orchestrator | 2025-08-29 15:16:13 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:16:13.245350 | orchestrator | 2025-08-29 15:16:13 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:16:13.245985 | orchestrator | 2025-08-29 15:16:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:16.291207 | orchestrator | 2025-08-29 15:16:16 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state STARTED 2025-08-29 15:16:16.293604 | orchestrator | 2025-08-29 15:16:16 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:16:16.293853 | orchestrator | 2025-08-29 15:16:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:19.338204 | orchestrator | 2025-08-29 15:16:19 | INFO  | Task 791251ea-8711-46ec-943a-ee26e77b30d6 is in state SUCCESS 2025-08-29 15:16:19.340603 | orchestrator | 2025-08-29 15:16:19.340685 | orchestrator | 2025-08-29 15:16:19.340702 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:16:19.340725 | orchestrator | 2025-08-29 15:16:19.340745 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:16:19.340833 | orchestrator | Friday 29 August 2025 15:13:25 +0000 (0:00:00.281) 0:00:00.281 ********* 2025-08-29 15:16:19.340854 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:19.340875 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:16:19.340965 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:16:19.341159 | orchestrator | 2025-08-29 15:16:19.341171 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:16:19.341183 | orchestrator | Friday 29 August 2025 15:13:25 +0000 (0:00:00.316) 0:00:00.597 ********* 2025-08-29 15:16:19.341194 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-08-29 15:16:19.341205 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-08-29 15:16:19.341217 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-08-29 15:16:19.341228 | orchestrator | 2025-08-29 15:16:19.341240 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-08-29 15:16:19.341250 | orchestrator | 2025-08-29 15:16:19.341261 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 15:16:19.341272 | orchestrator | Friday 29 August 2025 15:13:25 +0000 (0:00:00.440) 0:00:01.037 ********* 2025-08-29 15:16:19.341283 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:16:19.341295 | orchestrator | 2025-08-29 15:16:19.341306 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-08-29 15:16:19.341316 | orchestrator | Friday 29 August 2025 15:13:26 +0000 (0:00:00.547) 0:00:01.584 ********* 2025-08-29 15:16:19.341328 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-08-29 15:16:19.341339 | orchestrator | 2025-08-29 15:16:19.341350 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-08-29 15:16:19.341360 | orchestrator | Friday 29 August 2025 15:13:30 +0000 (0:00:03.719) 0:00:05.304 ********* 2025-08-29 15:16:19.341371 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-08-29 15:16:19.341390 | orchestrator | changed: [testbed-node-0] => 
(item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-08-29 15:16:19.341412 | orchestrator | 2025-08-29 15:16:19.341467 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-08-29 15:16:19.341485 | orchestrator | Friday 29 August 2025 15:13:36 +0000 (0:00:06.824) 0:00:12.128 ********* 2025-08-29 15:16:19.341500 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:16:19.341516 | orchestrator | 2025-08-29 15:16:19.341531 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-08-29 15:16:19.341659 | orchestrator | Friday 29 August 2025 15:13:40 +0000 (0:00:03.407) 0:00:15.536 ********* 2025-08-29 15:16:19.341674 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:16:19.341690 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-08-29 15:16:19.341707 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-08-29 15:16:19.341723 | orchestrator | 2025-08-29 15:16:19.341758 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-08-29 15:16:19.341776 | orchestrator | Friday 29 August 2025 15:13:48 +0000 (0:00:08.510) 0:00:24.046 ********* 2025-08-29 15:16:19.341793 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:16:19.341809 | orchestrator | 2025-08-29 15:16:19.341826 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-08-29 15:16:19.341843 | orchestrator | Friday 29 August 2025 15:13:53 +0000 (0:00:04.259) 0:00:28.305 ********* 2025-08-29 15:16:19.341860 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-08-29 15:16:19.341872 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-08-29 15:16:19.341881 | orchestrator | 2025-08-29 15:16:19.341891 | orchestrator | TASK [octavia : Adding octavia 
related roles] ********************************** 2025-08-29 15:16:19.341901 | orchestrator | Friday 29 August 2025 15:14:01 +0000 (0:00:08.153) 0:00:36.459 ********* 2025-08-29 15:16:19.341910 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-08-29 15:16:19.341919 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-08-29 15:16:19.341929 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-08-29 15:16:19.341938 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-08-29 15:16:19.341947 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-08-29 15:16:19.341957 | orchestrator | 2025-08-29 15:16:19.341966 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 15:16:19.341976 | orchestrator | Friday 29 August 2025 15:14:18 +0000 (0:00:16.807) 0:00:53.266 ********* 2025-08-29 15:16:19.342011 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:16:19.342074 | orchestrator | 2025-08-29 15:16:19.342084 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-08-29 15:16:19.342094 | orchestrator | Friday 29 August 2025 15:14:18 +0000 (0:00:00.574) 0:00:53.841 ********* 2025-08-29 15:16:19.342127 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.", "response": "
503 Service Unavailable\nNo server is available to handle this request.\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request."} 2025-08-29 15:16:19.342141 | orchestrator | 2025-08-29 15:16:19.342151 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:16:19.342162 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-08-29 15:16:19.342174 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:16:19.342184 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:16:19.342208 | orchestrator | 2025-08-29 15:16:19.342218 | orchestrator | 2025-08-29 15:16:19.342227 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:16:19.342237 | orchestrator | Friday 29 August 2025 15:14:22 +0000 (0:00:03.471) 0:00:57.313 ********* 2025-08-29 15:16:19.342247 | orchestrator | =============================================================================== 2025-08-29 15:16:19.342256 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.81s 2025-08-29 15:16:19.342266 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.51s 2025-08-29 15:16:19.342276 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.15s 2025-08-29 15:16:19.342285 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.82s 2025-08-29 15:16:19.342295 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 4.26s 2025-08-29 15:16:19.342304 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 
3.72s 2025-08-29 15:16:19.342314 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.47s 2025-08-29 15:16:19.342324 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.41s 2025-08-29 15:16:19.342333 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.58s 2025-08-29 15:16:19.342343 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.55s 2025-08-29 15:16:19.342352 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2025-08-29 15:16:19.342362 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-08-29 15:16:19.342371 | orchestrator | 2025-08-29 15:16:19.342381 | orchestrator | 2025-08-29 15:16:19.342391 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:16:19.342400 | orchestrator | 2025-08-29 15:16:19.342410 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:16:19.342420 | orchestrator | Friday 29 August 2025 15:11:57 +0000 (0:00:00.500) 0:00:00.500 ********* 2025-08-29 15:16:19.342429 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:19.342440 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:16:19.342449 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:16:19.342459 | orchestrator | 2025-08-29 15:16:19.342468 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:16:19.342485 | orchestrator | Friday 29 August 2025 15:11:58 +0000 (0:00:00.558) 0:00:01.059 ********* 2025-08-29 15:16:19.342496 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-08-29 15:16:19.342506 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-08-29 15:16:19.342515 | orchestrator | ok: [testbed-node-2] => 
(item=enable_nova_True) 2025-08-29 15:16:19.342525 | orchestrator | 2025-08-29 15:16:19.342535 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-08-29 15:16:19.342544 | orchestrator | 2025-08-29 15:16:19.342554 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-08-29 15:16:19.342564 | orchestrator | Friday 29 August 2025 15:11:59 +0000 (0:00:01.130) 0:00:02.189 ********* 2025-08-29 15:16:19.342573 | orchestrator | 2025-08-29 15:16:19.342583 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-08-29 15:16:19.342592 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:19.342602 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:16:19.342612 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:16:19.342621 | orchestrator | 2025-08-29 15:16:19.342631 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:16:19.342641 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:16:19.342651 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:16:19.342667 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:16:19.342677 | orchestrator | 2025-08-29 15:16:19.342686 | orchestrator | 2025-08-29 15:16:19.342696 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:16:19.342706 | orchestrator | Friday 29 August 2025 15:14:51 +0000 (0:02:52.193) 0:02:54.382 ********* 2025-08-29 15:16:19.342715 | orchestrator | =============================================================================== 2025-08-29 15:16:19.342725 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 172.19s 2025-08-29 
15:16:19.342734 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.13s
Group hosts based on Kolla action --------------------------------------- 0.56s

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Friday 29 August 2025 15:14:00 +0000 (0:00:00.275) 0:00:00.275 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Friday 29 August 2025 15:14:01 +0000 (0:00:00.295) 0:00:00.570 *********
ok: [testbed-node-0] => (item=enable_grafana_True)
ok: [testbed-node-1] => (item=enable_grafana_True)
ok: [testbed-node-2] => (item=enable_grafana_True)

PLAY [Apply role grafana] ******************************************************

TASK [grafana : include_tasks] *************************************************
Friday 29 August 2025 15:14:01 +0000 (0:00:00.359) 0:00:00.930 *********
included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [grafana : Ensuring config directories exist] *****************************
Friday 29 August 2025 15:14:02 +0000 (0:00:00.519) 0:00:01.450 *********
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})

TASK [grafana : Check if extra configuration file exists] **********************
Friday 29 August 2025 15:14:02 +0000 (0:00:00.763) 0:00:02.213 *********
[WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
issue: '/operations/prometheus/grafana' is not a directory
ok: [testbed-node-0 -> localhost]

TASK [grafana : include_tasks] *************************************************
Friday 29 August 2025 15:14:03 +0000 (0:00:00.853) 0:00:03.066 *********
included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
Friday 29 August 2025 15:14:04 +0000 (0:00:00.797) 0:00:03.864 *********
changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})

TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
Friday 29 August 2025 15:14:05 +0000 (0:00:01.410) 0:00:05.274 *********
skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-2]

TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
Friday 29 August 2025 15:14:06 +0000 (0:00:00.373) 0:00:05.648 *********
skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-2]

TASK [grafana : Copying over config.json files] ********************************
Friday 29 August 2025 15:14:07 +0000 (0:00:00.863) 0:00:06.511 *********
changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})

TASK [grafana : Copying over grafana.ini] **************************************
Friday 29 August 2025 15:14:08 +0000 (0:00:01.174) 0:00:07.686 *********
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})

TASK [grafana : Copying over extra configuration file] *************************
Friday 29 August 2025 15:14:09 +0000 (0:00:01.574) 0:00:09.261 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [grafana : Configuring Prometheus as data source for Grafana] *************
Friday 29 August 2025 15:14:10 +0000 (0:00:00.780) 0:00:10.041 *********
changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)

TASK [grafana : Configuring dashboards provisioning] ***************************
Friday 29 August 2025 15:14:12 +0000 (0:00:01.460) 0:00:11.502 *********
changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)

TASK [grafana : Find custom grafana dashboards] ********************************
Friday 29 August 2025 15:14:13 +0000 (0:00:01.528) 0:00:13.031 *********
ok: [testbed-node-0 -> localhost]

TASK [grafana : Find templated grafana dashboards] *****************************
Friday 29 August 2025 15:14:14 +0000 (0:00:00.956) 0:00:13.988 *********
[WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
issue: '/etc/kolla/grafana/dashboards' is not a directory
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [grafana : Prune templated Grafana dashboards] ****************************
Friday 29 August 2025 15:14:15 +0000 (0:00:00.909) 0:00:14.898 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [grafana : Copying over custom dashboards] ********************************
Friday 29 August 2025 15:14:16 +0000 (0:00:00.768) 0:00:15.666 *********
changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1096657, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2300603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1096657, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2300603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1096657, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2300603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1096765, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.286119, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1096765, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.286119, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1096765, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.286119, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096699, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2480607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096699, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2480607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096699, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2480607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1096766, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2890613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1096766, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2890613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1096766, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2890613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1096712, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2520607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1096712, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2520607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1096712, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2520607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1096762, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2850614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1096762, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2850614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1096762, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2850614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096652, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2297895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096652, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2297895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096652, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2297895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096661, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2311206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096661, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2311206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096661, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2311206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid':
False, 'isgid': False}}) 2025-08-29 15:16:19.344722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096701, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2480607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096701, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2480607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096701, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2480607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1096717, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2810612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1096717, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2810612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1096717, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2810612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1096764, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2858984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1096764, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2858984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1096764, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2858984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1096662, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2470605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1096662, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2470605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1096662, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2470605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1096761, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2830613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1096761, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2830613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1096761, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2830613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1096715, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2520607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1096715, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2520607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1096715, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2520607, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1096709, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2510607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1096709, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2510607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.344979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1096709, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 
'mtime': 1756453149.0, 'ctime': 1756477169.2510607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1096707, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2501242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1096707, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2501242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1096707, 'dev': 111, 
'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2501242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1096760, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2830613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1096760, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2830613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 
'inode': 1096760, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2830613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1096702, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2490606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1096702, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2490606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 44791, 'inode': 1096702, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2490606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1096763, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2850614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1096763, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2850614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1096763, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2850614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096807, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.329631, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096807, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.329631, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096807, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.329631, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096775, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3000615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096775, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3000615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096775, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3000615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096772, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2906034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096772, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2906034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345275 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096772, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2906034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1096782, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3060617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1096782, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3060617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-08-29 15:16:19.345316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1096782, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3060617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096769, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2895906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096769, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2895906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096769, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2895906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096787, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.320062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096787, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.320062, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096787, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.320062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096784, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3130617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096784, 
'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3130617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096784, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3130617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1096788, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3210618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1096788, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3210618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1096788, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3210618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096804, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.328062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096804, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.328062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096804, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.328062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1096786, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3150618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345649 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1096786, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3150618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1096786, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3150618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096777, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.301691, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345696 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096777, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.301691, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096777, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.301691, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096774, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2950613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-08-29 15:16:19.345750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096774, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2950613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096774, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2950613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096776, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3010616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096776, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3010616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096776, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3010616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096773, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2920613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096773, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2920613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096773, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2920613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1096779, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 
1756477169.3020616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1096779, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3020616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1096779, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3020616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
222049, 'inode': 1096798, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.327062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.345970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096798, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.327062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.346011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096791, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3240707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.346075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096798, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.327062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.346091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096791, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3240707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.346105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096770, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2898731, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.346128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096791, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3240707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.346149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096770, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2898731, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.346162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096771, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2900615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.346176 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096770, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2898731, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.346197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096771, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2900615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.346212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096785, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3140619, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.346233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096785, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3140619, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.346252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096771, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.2900615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.346266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1096789, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.322062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.346280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1096789, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.322062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.346300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096785, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477169.3140619, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.346322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1096789, 'dev': 111, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 
'ctime': 1756477169.322062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:19.346336 | orchestrator | 2025-08-29 15:16:19.346349 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-08-29 15:16:19.346364 | orchestrator | Friday 29 August 2025 15:14:55 +0000 (0:00:39.197) 0:00:54.864 ********* 2025-08-29 15:16:19.346377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:19.346396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:19.346411 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:19.346425 | orchestrator | 2025-08-29 15:16:19.346439 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-08-29 15:16:19.346452 | orchestrator | Friday 29 August 2025 15:14:56 +0000 (0:00:01.065) 0:00:55.929 ********* 2025-08-29 15:16:19.346465 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:19.346480 | orchestrator | 2025-08-29 15:16:19.346493 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-08-29 15:16:19.346505 | orchestrator | Friday 29 August 2025 15:14:58 +0000 (0:00:02.411) 0:00:58.340 ********* 2025-08-29 15:16:19.346518 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:19.346530 | orchestrator | 2025-08-29 15:16:19.346543 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-08-29 15:16:19.346555 | orchestrator | Friday 29 August 2025 15:15:01 +0000 (0:00:02.334) 0:01:00.675 ********* 2025-08-29 15:16:19.346567 | orchestrator | 2025-08-29 15:16:19.346580 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-08-29 15:16:19.346601 | orchestrator | Friday 29 August 2025 15:15:01 +0000 (0:00:00.080) 0:01:00.755 ********* 2025-08-29 15:16:19.346614 | orchestrator | 2025-08-29 15:16:19.346626 | orchestrator | 
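The "Check grafana containers" items above carry a Kolla-style service definition (container_name, image, volumes, and an `haproxy` map of frontends). A minimal sketch of reading the enabled haproxy frontends out of such a dict; the `haproxy_listen_ports` helper is illustrative and not part of kolla-ansible:

```python
# Extract enabled haproxy frontend ports from a Kolla-style service
# definition.  The dict shape mirrors the "Check grafana containers"
# item in the log above; the helper itself is a hypothetical sketch.

def haproxy_listen_ports(service: dict) -> dict:
    """Map each enabled haproxy frontend name to its listen port."""
    ports = {}
    for name, frontend in service.get("haproxy", {}).items():
        # Kolla configs mix booleans and "yes"/"no" strings for 'enabled',
        # as seen in the log item above.
        if frontend.get("enabled") in (True, "yes"):
            ports[name] = int(frontend["listen_port"])
    return ports

grafana = {
    "container_name": "grafana",
    "image": "registry.osism.tech/kolla/grafana:2024.2",
    "haproxy": {
        "grafana_server": {"enabled": "yes", "external": False, "listen_port": "3000"},
        "grafana_server_external": {"enabled": True, "external": True, "listen_port": "3000"},
    },
}

print(haproxy_listen_ports(grafana))
# {'grafana_server': 3000, 'grafana_server_external': 3000}
```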
TASK [grafana : Flush handlers] ************************************************ 2025-08-29 15:16:19.346648 | orchestrator | Friday 29 August 2025 15:15:01 +0000 (0:00:00.082) 0:01:00.838 ********* 2025-08-29 15:16:19.346661 | orchestrator | 2025-08-29 15:16:19.346672 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-08-29 15:16:19.346684 | orchestrator | Friday 29 August 2025 15:15:01 +0000 (0:00:00.309) 0:01:01.148 ********* 2025-08-29 15:16:19.346695 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:19.346707 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:19.346719 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:19.346732 | orchestrator | 2025-08-29 15:16:19.346745 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-08-29 15:16:19.346758 | orchestrator | Friday 29 August 2025 15:15:03 +0000 (0:00:02.012) 0:01:03.161 ********* 2025-08-29 15:16:19.346771 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:19.346783 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:19.346797 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-08-29 15:16:19.346813 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-08-29 15:16:19.346827 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
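The FAILED - RETRYING messages above come from Ansible's `until`/`retries`/`delay` loop on the wait task (12 retries configured here; Grafana came up after three failed attempts). A minimal Python sketch of the same poll-until-healthy pattern, with a stubbed health probe standing in for the real HTTP check:

```python
import time

def wait_until(check, retries=12, delay=0.0):
    """Re-run `check` until it returns truthy, mirroring Ansible's
    until/retries/delay loop.  Returns the number of failed attempts;
    raises TimeoutError once the retries are exhausted."""
    for attempt in range(retries + 1):  # Ansible attempts the task retries+1 times
        if check():
            return attempt
        if attempt < retries:
            time.sleep(delay)
    raise TimeoutError(f"check still failing after {retries} retries")

# Stub standing in for an HTTP health probe against Grafana: it reports
# healthy on the 4th call, i.e. after 3 "FAILED - RETRYING" rounds.
calls = {"n": 0}
def grafana_up():
    calls["n"] += 1
    return calls["n"] >= 4

print(wait_until(grafana_up))
# 3
```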
2025-08-29 15:16:19.346842 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:19.346854 | orchestrator | 2025-08-29 15:16:19.346868 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-08-29 15:16:19.346882 | orchestrator | Friday 29 August 2025 15:15:42 +0000 (0:00:38.702) 0:01:41.864 ********* 2025-08-29 15:16:19.346894 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:19.346907 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:16:19.346918 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:16:19.346932 | orchestrator | 2025-08-29 15:16:19.346947 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-08-29 15:16:19.346961 | orchestrator | Friday 29 August 2025 15:16:12 +0000 (0:00:30.247) 0:02:12.111 ********* 2025-08-29 15:16:19.346975 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:19.347016 | orchestrator | 2025-08-29 15:16:19.347031 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-08-29 15:16:19.347045 | orchestrator | Friday 29 August 2025 15:16:14 +0000 (0:00:02.114) 0:02:14.225 ********* 2025-08-29 15:16:19.347058 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:19.347070 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:19.347082 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:19.347095 | orchestrator | 2025-08-29 15:16:19.347107 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-08-29 15:16:19.347120 | orchestrator | Friday 29 August 2025 15:16:15 +0000 (0:00:00.671) 0:02:14.897 ********* 2025-08-29 15:16:19.347134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2025-08-29 15:16:19.347151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-08-29 15:16:19.347167 | orchestrator | 2025-08-29 15:16:19.347180 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-08-29 15:16:19.347192 | orchestrator | Friday 29 August 2025 15:16:17 +0000 (0:00:02.356) 0:02:17.253 ********* 2025-08-29 15:16:19.347216 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:19.347229 | orchestrator | 2025-08-29 15:16:19.347241 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:16:19.347253 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 15:16:19.347268 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 15:16:19.347281 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 15:16:19.347293 | orchestrator | 2025-08-29 15:16:19.347306 | orchestrator | 2025-08-29 15:16:19.347318 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:16:19.347330 | orchestrator | Friday 29 August 2025 15:16:18 +0000 (0:00:00.256) 0:02:17.509 ********* 2025-08-29 15:16:19.347343 | orchestrator | =============================================================================== 2025-08-29 15:16:19.347356 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 39.20s 2025-08-29 15:16:19.347369 | orchestrator | grafana : Waiting for grafana 
to start on first node ------------------- 38.70s 2025-08-29 15:16:19.347383 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 30.25s 2025-08-29 15:16:19.347396 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.41s 2025-08-29 15:16:19.347411 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.36s 2025-08-29 15:16:19.347425 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.33s 2025-08-29 15:16:19.347438 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.11s 2025-08-29 15:16:19.347462 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.01s 2025-08-29 15:16:19.347475 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.57s 2025-08-29 15:16:19.347487 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.53s 2025-08-29 15:16:19.347500 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.46s 2025-08-29 15:16:19.347513 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.41s 2025-08-29 15:16:19.347525 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.17s 2025-08-29 15:16:19.347538 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.07s 2025-08-29 15:16:19.347550 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.96s 2025-08-29 15:16:19.347562 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.91s 2025-08-29 15:16:19.347575 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.86s 2025-08-29 15:16:19.347588 | orchestrator | grafana : Check if extra configuration file 
exists ---------------------- 0.85s 2025-08-29 15:16:19.347600 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.80s 2025-08-29 15:16:19.347613 | orchestrator | grafana : Copying over extra configuration file ------------------------- 0.78s 2025-08-29 15:16:19.347687 | orchestrator | 2025-08-29 15:16:19 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state STARTED 2025-08-29 15:16:19.347706 | orchestrator | 2025-08-29 15:16:19 | INFO  | Wait 1 second(s) until the next check [... identical "state STARTED" / "Wait 1 second(s)" polling messages, repeated roughly every 3 seconds from 15:16:22 through 15:19:40, elided ...] 2025-08-29 15:19:43.340307 | orchestrator | 2025-08-29 15:19:43 | INFO  | Task 5e501805-f0f5-4ae8-bc10-02e15d0cf22d is in state SUCCESS 2025-08-29 15:19:43.343463 | orchestrator | 2025-08-29 15:19:43.343548 | orchestrator | 2025-08-29 15:19:43.343562 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:19:43.343573 | orchestrator | 2025-08-29 15:19:43.343583 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-08-29 15:19:43.343593 | orchestrator | Friday 29 August 2025 15:10:26 +0000 (0:00:00.804) 0:00:00.804 ********* 2025-08-29 15:19:43.343603 | orchestrator | changed: [testbed-manager] 2025-08-29 15:19:43.343614 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:43.343623 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:19:43.343633 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:19:43.343643 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:19:43.343652 | orchestrator | changed: [testbed-node-4] 2025-08-29
15:19:43.343661 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:19:43.343713 | orchestrator | 2025-08-29 15:19:43.343784 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:19:43.343795 | orchestrator | Friday 29 August 2025 15:10:28 +0000 (0:00:01.657) 0:00:02.461 ********* 2025-08-29 15:19:43.343805 | orchestrator | changed: [testbed-manager] 2025-08-29 15:19:43.343815 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:43.343825 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:19:43.343835 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:19:43.343844 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:19:43.343942 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:19:43.343952 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:19:43.343961 | orchestrator | 2025-08-29 15:19:43.343971 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:19:43.343981 | orchestrator | Friday 29 August 2025 15:10:29 +0000 (0:00:00.906) 0:00:03.368 ********* 2025-08-29 15:19:43.343992 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-08-29 15:19:43.344003 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-08-29 15:19:43.344014 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-08-29 15:19:43.344025 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-08-29 15:19:43.344036 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-08-29 15:19:43.344047 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-08-29 15:19:43.344057 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-08-29 15:19:43.344071 | orchestrator | 2025-08-29 15:19:43.344087 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-08-29 15:19:43.344104 | 
orchestrator | 2025-08-29 15:19:43.344120 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-08-29 15:19:43.344196 | orchestrator | Friday 29 August 2025 15:10:30 +0000 (0:00:01.130) 0:00:04.498 ********* 2025-08-29 15:19:43.344215 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:19:43.344232 | orchestrator | 2025-08-29 15:19:43.344248 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-08-29 15:19:43.344264 | orchestrator | Friday 29 August 2025 15:10:31 +0000 (0:00:00.942) 0:00:05.440 ********* 2025-08-29 15:19:43.344297 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-08-29 15:19:43.344315 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-08-29 15:19:43.344356 | orchestrator | 2025-08-29 15:19:43.344374 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-08-29 15:19:43.344388 | orchestrator | Friday 29 August 2025 15:10:35 +0000 (0:00:04.254) 0:00:09.695 ********* 2025-08-29 15:19:43.344403 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:19:43.344417 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:19:43.344431 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:43.344446 | orchestrator | 2025-08-29 15:19:43.344461 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-08-29 15:19:43.344477 | orchestrator | Friday 29 August 2025 15:10:40 +0000 (0:00:04.441) 0:00:14.136 ********* 2025-08-29 15:19:43.344492 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:43.344508 | orchestrator | 2025-08-29 15:19:43.344525 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-08-29 15:19:43.344570 | orchestrator | Friday 29 August 2025 15:10:40 +0000 (0:00:00.723) 0:00:14.859 
********* 2025-08-29 15:19:43.344587 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:43.344603 | orchestrator | 2025-08-29 15:19:43.344694 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-08-29 15:19:43.344710 | orchestrator | Friday 29 August 2025 15:10:43 +0000 (0:00:02.213) 0:00:17.072 ********* 2025-08-29 15:19:43.344726 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:43.344741 | orchestrator | 2025-08-29 15:19:43.344773 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 15:19:43.344790 | orchestrator | Friday 29 August 2025 15:10:50 +0000 (0:00:07.324) 0:00:24.397 ********* 2025-08-29 15:19:43.344805 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.344821 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.344838 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.344854 | orchestrator | 2025-08-29 15:19:43.344892 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-08-29 15:19:43.344911 | orchestrator | Friday 29 August 2025 15:10:51 +0000 (0:00:00.555) 0:00:24.952 ********* 2025-08-29 15:19:43.344925 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:19:43.344935 | orchestrator | 2025-08-29 15:19:43.344944 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-08-29 15:19:43.344954 | orchestrator | Friday 29 August 2025 15:11:22 +0000 (0:00:31.897) 0:00:56.849 ********* 2025-08-29 15:19:43.344964 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:43.344973 | orchestrator | 2025-08-29 15:19:43.344982 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-08-29 15:19:43.344992 | orchestrator | Friday 29 August 2025 15:11:37 +0000 (0:00:14.737) 0:01:11.586 ********* 2025-08-29 15:19:43.345001 | orchestrator | ok: [testbed-node-0] 
2025-08-29 15:19:43.345022 | orchestrator | 2025-08-29 15:19:43.345032 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-08-29 15:19:43.345042 | orchestrator | Friday 29 August 2025 15:11:50 +0000 (0:00:13.056) 0:01:24.643 ********* 2025-08-29 15:19:43.345070 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:19:43.345081 | orchestrator | 2025-08-29 15:19:43.345090 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-08-29 15:19:43.345100 | orchestrator | Friday 29 August 2025 15:11:53 +0000 (0:00:02.659) 0:01:27.302 ********* 2025-08-29 15:19:43.345110 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.345133 | orchestrator | 2025-08-29 15:19:43.345143 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 15:19:43.345153 | orchestrator | Friday 29 August 2025 15:11:55 +0000 (0:00:01.651) 0:01:28.954 ********* 2025-08-29 15:19:43.345164 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:19:43.345174 | orchestrator | 2025-08-29 15:19:43.345184 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-08-29 15:19:43.345193 | orchestrator | Friday 29 August 2025 15:11:56 +0000 (0:00:01.886) 0:01:30.841 ********* 2025-08-29 15:19:43.345203 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:19:43.345212 | orchestrator | 2025-08-29 15:19:43.345222 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-08-29 15:19:43.345232 | orchestrator | Friday 29 August 2025 15:12:14 +0000 (0:00:17.785) 0:01:48.626 ********* 2025-08-29 15:19:43.345241 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.345251 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.345260 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 15:19:43.345270 | orchestrator | 2025-08-29 15:19:43.345280 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-08-29 15:19:43.345290 | orchestrator | 2025-08-29 15:19:43.345300 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-08-29 15:19:43.345309 | orchestrator | Friday 29 August 2025 15:12:15 +0000 (0:00:00.344) 0:01:48.971 ********* 2025-08-29 15:19:43.345319 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:19:43.345351 | orchestrator | 2025-08-29 15:19:43.345369 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-08-29 15:19:43.345383 | orchestrator | Friday 29 August 2025 15:12:15 +0000 (0:00:00.722) 0:01:49.694 ********* 2025-08-29 15:19:43.345394 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.345405 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.345416 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:43.345432 | orchestrator | 2025-08-29 15:19:43.345448 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-08-29 15:19:43.345464 | orchestrator | Friday 29 August 2025 15:12:17 +0000 (0:00:02.069) 0:01:51.763 ********* 2025-08-29 15:19:43.345480 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.345496 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.345510 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:43.345525 | orchestrator | 2025-08-29 15:19:43.345540 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-08-29 15:19:43.345556 | orchestrator | Friday 29 August 2025 15:12:19 +0000 (0:00:02.069) 0:01:53.833 ********* 2025-08-29 15:19:43.345571 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.345587 | orchestrator | skipping: [testbed-node-1] 
2025-08-29 15:19:43.345603 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.345620 | orchestrator | 2025-08-29 15:19:43.345635 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-08-29 15:19:43.345653 | orchestrator | Friday 29 August 2025 15:12:20 +0000 (0:00:00.412) 0:01:54.246 ********* 2025-08-29 15:19:43.345666 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-08-29 15:19:43.345676 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.345687 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-08-29 15:19:43.345704 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.345720 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-08-29 15:19:43.345735 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-08-29 15:19:43.345751 | orchestrator | 2025-08-29 15:19:43.345768 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-08-29 15:19:43.345783 | orchestrator | Friday 29 August 2025 15:12:29 +0000 (0:00:08.907) 0:02:03.154 ********* 2025-08-29 15:19:43.345801 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.345831 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.345848 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.345860 | orchestrator | 2025-08-29 15:19:43.345869 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-08-29 15:19:43.345879 | orchestrator | Friday 29 August 2025 15:12:29 +0000 (0:00:00.419) 0:02:03.573 ********* 2025-08-29 15:19:43.345897 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-08-29 15:19:43.345907 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.345916 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-08-29 15:19:43.345926 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.345935 | 
orchestrator | skipping: [testbed-node-2] => (item=None)  2025-08-29 15:19:43.345945 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.345954 | orchestrator | 2025-08-29 15:19:43.345964 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-08-29 15:19:43.345973 | orchestrator | Friday 29 August 2025 15:12:30 +0000 (0:00:00.682) 0:02:04.255 ********* 2025-08-29 15:19:43.345982 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.345992 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.346001 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:43.346011 | orchestrator | 2025-08-29 15:19:43.346071 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-08-29 15:19:43.346082 | orchestrator | Friday 29 August 2025 15:12:30 +0000 (0:00:00.541) 0:02:04.797 ********* 2025-08-29 15:19:43.346091 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.346101 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.346110 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:43.346120 | orchestrator | 2025-08-29 15:19:43.346129 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-08-29 15:19:43.346139 | orchestrator | Friday 29 August 2025 15:12:31 +0000 (0:00:01.024) 0:02:05.821 ********* 2025-08-29 15:19:43.346149 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.346158 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.346180 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:43.346190 | orchestrator | 2025-08-29 15:19:43.346199 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-08-29 15:19:43.346209 | orchestrator | Friday 29 August 2025 15:12:34 +0000 (0:00:02.240) 0:02:08.061 ********* 2025-08-29 15:19:43.346219 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
15:19:43.346228 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.346238 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:19:43.346247 | orchestrator | 2025-08-29 15:19:43.346257 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-08-29 15:19:43.346266 | orchestrator | Friday 29 August 2025 15:12:56 +0000 (0:00:22.261) 0:02:30.323 ********* 2025-08-29 15:19:43.346276 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.346285 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.346295 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:19:43.346305 | orchestrator | 2025-08-29 15:19:43.346314 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-08-29 15:19:43.346324 | orchestrator | Friday 29 August 2025 15:13:09 +0000 (0:00:13.099) 0:02:43.423 ********* 2025-08-29 15:19:43.346395 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.346406 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:19:43.346416 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.346426 | orchestrator | 2025-08-29 15:19:43.346436 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-08-29 15:19:43.346445 | orchestrator | Friday 29 August 2025 15:13:11 +0000 (0:00:02.054) 0:02:45.477 ********* 2025-08-29 15:19:43.346455 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.346465 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.346474 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:43.346484 | orchestrator | 2025-08-29 15:19:43.346493 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-08-29 15:19:43.346512 | orchestrator | Friday 29 August 2025 15:13:25 +0000 (0:00:13.684) 0:02:59.162 ********* 2025-08-29 15:19:43.346522 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.346532 | 
orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.346541 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.346551 | orchestrator | 2025-08-29 15:19:43.346560 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-08-29 15:19:43.346570 | orchestrator | Friday 29 August 2025 15:13:26 +0000 (0:00:01.099) 0:03:00.261 ********* 2025-08-29 15:19:43.346579 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.346589 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.346598 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.346608 | orchestrator | 2025-08-29 15:19:43.346617 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-08-29 15:19:43.346627 | orchestrator | 2025-08-29 15:19:43.346636 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 15:19:43.346646 | orchestrator | Friday 29 August 2025 15:13:26 +0000 (0:00:00.443) 0:03:00.705 ********* 2025-08-29 15:19:43.346656 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:19:43.346683 | orchestrator | 2025-08-29 15:19:43.346691 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-08-29 15:19:43.346699 | orchestrator | Friday 29 August 2025 15:13:27 +0000 (0:00:00.551) 0:03:01.256 ********* 2025-08-29 15:19:43.346707 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-08-29 15:19:43.346715 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-08-29 15:19:43.346723 | orchestrator | 2025-08-29 15:19:43.346731 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-08-29 15:19:43.346739 | orchestrator | Friday 29 August 2025 15:13:30 +0000 (0:00:03.476) 0:03:04.733 ********* 2025-08-29 
15:19:43.346747 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-08-29 15:19:43.346757 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-08-29 15:19:43.346765 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-08-29 15:19:43.346773 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-08-29 15:19:43.346781 | orchestrator | 2025-08-29 15:19:43.346789 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-08-29 15:19:43.346802 | orchestrator | Friday 29 August 2025 15:13:37 +0000 (0:00:06.803) 0:03:11.536 ********* 2025-08-29 15:19:43.346810 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:19:43.346818 | orchestrator | 2025-08-29 15:19:43.346825 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-08-29 15:19:43.346833 | orchestrator | Friday 29 August 2025 15:13:40 +0000 (0:00:03.307) 0:03:14.843 ********* 2025-08-29 15:19:43.346841 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:19:43.346849 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-08-29 15:19:43.346857 | orchestrator | 2025-08-29 15:19:43.346865 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-08-29 15:19:43.346872 | orchestrator | Friday 29 August 2025 15:13:44 +0000 (0:00:03.983) 0:03:18.827 ********* 2025-08-29 15:19:43.346880 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:19:43.346888 | orchestrator | 2025-08-29 15:19:43.346895 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-08-29 
15:19:43.346903 | orchestrator | Friday 29 August 2025 15:13:48 +0000 (0:00:03.882) 0:03:22.710 ********* 2025-08-29 15:19:43.346911 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-08-29 15:19:43.346919 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-08-29 15:19:43.346933 | orchestrator | 2025-08-29 15:19:43.346941 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-08-29 15:19:43.346965 | orchestrator | Friday 29 August 2025 15:13:57 +0000 (0:00:08.679) 0:03:31.390 ********* 2025-08-29 15:19:43.346990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:43.347003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:43.347018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:43.347054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.347093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.347102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.347110 | orchestrator | 2025-08-29 15:19:43.347118 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-08-29 15:19:43.347126 | orchestrator | Friday 29 August 2025 15:13:58 +0000 (0:00:01.416) 0:03:32.806 ********* 2025-08-29 15:19:43.347134 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.347142 | orchestrator | 2025-08-29 15:19:43.347150 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-08-29 15:19:43.347158 | orchestrator | Friday 29 August 2025 15:13:59 +0000 (0:00:00.176) 0:03:32.983 ********* 2025-08-29 15:19:43.347166 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.347174 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.347181 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.347189 | orchestrator | 2025-08-29 15:19:43.347197 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-08-29 15:19:43.347205 | orchestrator | Friday 29 August 2025 15:13:59 +0000 (0:00:00.367) 0:03:33.350 ********* 2025-08-29 15:19:43.347213 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:19:43.347220 | orchestrator | 2025-08-29 15:19:43.347228 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-08-29 15:19:43.347236 | orchestrator | Friday 29 August 2025 15:14:00 +0000 (0:00:01.079) 0:03:34.430 ********* 2025-08-29 15:19:43.347244 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.347251 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.347259 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 15:19:43.347267 | orchestrator | 2025-08-29 15:19:43.347275 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 15:19:43.347283 | orchestrator | Friday 29 August 2025 15:14:00 +0000 (0:00:00.291) 0:03:34.721 ********* 2025-08-29 15:19:43.347290 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:19:43.347298 | orchestrator | 2025-08-29 15:19:43.347306 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-08-29 15:19:43.347314 | orchestrator | Friday 29 August 2025 15:14:01 +0000 (0:00:00.586) 0:03:35.307 ********* 2025-08-29 15:19:43.347327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 
15:19:43.347364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:43.347374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:43.347384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.347422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.347436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.347444 | orchestrator | 2025-08-29 15:19:43.347453 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-08-29 15:19:43.347461 | orchestrator | Friday 29 August 2025 15:14:03 +0000 (0:00:02.375) 0:03:37.682 ********* 2025-08-29 15:19:43.347469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:19:43.347478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:19:43.347659 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.347694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:19:43.347711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:19:43.347719 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.347735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:19:43.347745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 
'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:19:43.347753 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.347761 | orchestrator | 2025-08-29 15:19:43.347769 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-08-29 15:19:43.347777 | orchestrator | Friday 29 August 2025 15:14:04 +0000 (0:00:00.730) 0:03:38.412 ********* 2025-08-29 15:19:43.347790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:19:43.347803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:19:43.347812 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.347828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2025-08-29 15:19:43.347837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:19:43.347845 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.347854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 
15:19:43.347882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:19:43.347891 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.347899 | orchestrator | 2025-08-29 15:19:43.347907 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-08-29 15:19:43.347915 | orchestrator | Friday 29 August 2025 15:14:05 +0000 (0:00:00.896) 0:03:39.309 ********* 2025-08-29 15:19:43.347929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:43.347939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:43.347948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:43.347966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.347981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2025-08-29 15:19:43.347990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.347998 | orchestrator | 2025-08-29 15:19:43.348006 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-08-29 15:19:43.348014 | orchestrator | Friday 29 August 2025 15:14:07 +0000 (0:00:02.296) 0:03:41.605 ********* 2025-08-29 15:19:43.348022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:43.348053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:43.348080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:43.348090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.348099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.348114 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.348122 | orchestrator | 2025-08-29 15:19:43.348130 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-08-29 15:19:43.348138 | orchestrator | Friday 29 August 2025 15:14:14 +0000 (0:00:06.684) 0:03:48.290 ********* 2025-08-29 15:19:43.348151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:19:43.348165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:19:43.348174 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.348183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:19:43.348197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:19:43.348205 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.348218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})
2025-08-29 15:19:43.348227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.348235 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:19:43.348243 | orchestrator |
2025-08-29 15:19:43.348251 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2025-08-29 15:19:43.348260 | orchestrator | Friday 29 August 2025 15:14:15 +0000 (0:00:00.852) 0:03:49.142 *********
2025-08-29 15:19:43.348267 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:19:43.348275 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:19:43.348283 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:19:43.348291 | orchestrator |
2025-08-29 15:19:43.348304 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2025-08-29 15:19:43.348312 | orchestrator | Friday 29 August 2025 15:14:16 +0000 (0:00:01.709) 0:03:50.851 *********
2025-08-29 15:19:43.348320 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:19:43.348344 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:19:43.348353 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:19:43.348361 | orchestrator |
2025-08-29 15:19:43.348369 | orchestrator | TASK [nova : Check nova containers] ********************************************
2025-08-29 15:19:43.348377 | orchestrator | Friday 29 August 2025 15:14:17 +0000 (0:00:00.342) 0:03:51.194 *********
2025-08-29 15:19:43.348386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:43.348413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:43.348434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:43.348444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.348453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.348466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.348475 | orchestrator | 2025-08-29 15:19:43.348483 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-08-29 15:19:43.348491 | orchestrator | Friday 29 August 2025 15:14:19 +0000 (0:00:02.187) 0:03:53.381 ********* 2025-08-29 15:19:43.348499 | orchestrator | 2025-08-29 15:19:43.348508 
| orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-08-29 15:19:43.348516 | orchestrator | Friday 29 August 2025 15:14:19 +0000 (0:00:00.143) 0:03:53.525 ********* 2025-08-29 15:19:43.348523 | orchestrator | 2025-08-29 15:19:43.348531 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-08-29 15:19:43.348540 | orchestrator | Friday 29 August 2025 15:14:19 +0000 (0:00:00.144) 0:03:53.670 ********* 2025-08-29 15:19:43.348547 | orchestrator | 2025-08-29 15:19:43.348555 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-08-29 15:19:43.348563 | orchestrator | Friday 29 August 2025 15:14:19 +0000 (0:00:00.185) 0:03:53.855 ********* 2025-08-29 15:19:43.348571 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:43.348579 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:19:43.348587 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:19:43.348595 | orchestrator | 2025-08-29 15:19:43.348603 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-08-29 15:19:43.348611 | orchestrator | Friday 29 August 2025 15:14:41 +0000 (0:00:21.986) 0:04:15.842 ********* 2025-08-29 15:19:43.348619 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:43.348627 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:19:43.348635 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:19:43.348643 | orchestrator | 2025-08-29 15:19:43.348651 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-08-29 15:19:43.348659 | orchestrator | 2025-08-29 15:19:43.348667 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 15:19:43.348675 | orchestrator | Friday 29 August 2025 15:14:48 +0000 (0:00:06.792) 0:04:22.635 ********* 2025-08-29 15:19:43.348698 | orchestrator | 
included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:19:43.348707 | orchestrator | 2025-08-29 15:19:43.348715 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 15:19:43.348724 | orchestrator | Friday 29 August 2025 15:14:50 +0000 (0:00:01.425) 0:04:24.060 ********* 2025-08-29 15:19:43.348731 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:19:43.348739 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:19:43.348747 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:19:43.348755 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.348763 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.348770 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.348778 | orchestrator | 2025-08-29 15:19:43.348795 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-08-29 15:19:43.348803 | orchestrator | Friday 29 August 2025 15:14:50 +0000 (0:00:00.671) 0:04:24.732 ********* 2025-08-29 15:19:43.348811 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.348819 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.348826 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.348834 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:19:43.348842 | orchestrator | 2025-08-29 15:19:43.348850 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-08-29 15:19:43.348863 | orchestrator | Friday 29 August 2025 15:14:52 +0000 (0:00:01.374) 0:04:26.106 ********* 2025-08-29 15:19:43.348871 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-08-29 15:19:43.348879 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-08-29 15:19:43.348887 | orchestrator | ok: 
[testbed-node-4] => (item=br_netfilter) 2025-08-29 15:19:43.348895 | orchestrator | 2025-08-29 15:19:43.348903 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-08-29 15:19:43.348911 | orchestrator | Friday 29 August 2025 15:14:52 +0000 (0:00:00.719) 0:04:26.825 ********* 2025-08-29 15:19:43.348919 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-08-29 15:19:43.348927 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-08-29 15:19:43.348934 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-08-29 15:19:43.348942 | orchestrator | 2025-08-29 15:19:43.348950 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-08-29 15:19:43.348958 | orchestrator | Friday 29 August 2025 15:14:54 +0000 (0:00:01.370) 0:04:28.196 ********* 2025-08-29 15:19:43.348965 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-08-29 15:19:43.348974 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:19:43.348981 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-08-29 15:19:43.348989 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:19:43.348997 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-08-29 15:19:43.349005 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:19:43.349013 | orchestrator | 2025-08-29 15:19:43.349021 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-08-29 15:19:43.349028 | orchestrator | Friday 29 August 2025 15:14:55 +0000 (0:00:00.834) 0:04:29.030 ********* 2025-08-29 15:19:43.349036 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 15:19:43.349044 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 15:19:43.349052 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
15:19:43.349060 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 15:19:43.349068 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 15:19:43.349076 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.349083 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-08-29 15:19:43.349091 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 15:19:43.349099 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 15:19:43.349107 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.349114 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-08-29 15:19:43.349122 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-08-29 15:19:43.349130 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-08-29 15:19:43.349138 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-08-29 15:19:43.349146 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-08-29 15:19:43.349170 | orchestrator | 2025-08-29 15:19:43.349179 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-08-29 15:19:43.349186 | orchestrator | Friday 29 August 2025 15:14:58 +0000 (0:00:03.150) 0:04:32.181 ********* 2025-08-29 15:19:43.349194 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.349202 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.349210 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.349218 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:19:43.349226 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:19:43.349234 | orchestrator | changed: 
[testbed-node-5] 2025-08-29 15:19:43.349241 | orchestrator | 2025-08-29 15:19:43.349249 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-08-29 15:19:43.349257 | orchestrator | Friday 29 August 2025 15:14:59 +0000 (0:00:01.508) 0:04:33.689 ********* 2025-08-29 15:19:43.349265 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.349273 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.349280 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.349288 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:19:43.349296 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:19:43.349303 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:19:43.349311 | orchestrator | 2025-08-29 15:19:43.349352 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-08-29 15:19:43.349361 | orchestrator | Friday 29 August 2025 15:15:01 +0000 (0:00:01.709) 0:04:35.399 ********* 2025-08-29 15:19:43.349380 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:19:43.349397 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:19:43.349406 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:19:43.349421 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:19:43.349430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:19:43.349443 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:19:43.349452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:19:43.349465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:19:43.349474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:19:43.349484 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.349510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.349523 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.349537 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.349546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.349554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.349562 | orchestrator | 2025-08-29 15:19:43.349570 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 
15:19:43.349586 | orchestrator | Friday 29 August 2025 15:15:04 +0000 (0:00:02.753) 0:04:38.152 ********* 2025-08-29 15:19:43.349595 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:19:43.349614 | orchestrator | 2025-08-29 15:19:43.349623 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-08-29 15:19:43.349631 | orchestrator | Friday 29 August 2025 15:15:05 +0000 (0:00:01.397) 0:04:39.549 ********* 2025-08-29 15:19:43.349639 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:19:43.349651 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:19:43.350187 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:19:43.350212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:19:43.350222 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:19:43.350240 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:19:43.350249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:19:43.350263 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:19:43.350272 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:19:43.350288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.350296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.350310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.350319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.350327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.350395 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.350404 | orchestrator |
2025-08-29 15:19:43.350412 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-08-29 15:19:43.350421 | orchestrator | Friday 29 August 2025 15:15:09 +0000 (0:00:03.825) 0:04:43.375 *********
2025-08-29 15:19:43.350435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:19:43.350450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:19:43.350459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.350467 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:19:43.350476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:19:43.350488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:19:43.350501 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.350509 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:19:43.350523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:19:43.350531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:19:43.350540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.350548 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:19:43.350579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:19:43.350588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.350596 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:19:43.350611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:19:43.350637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.350646 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:19:43.350654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:19:43.350662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.350670 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:19:43.350678 | orchestrator |
2025-08-29 15:19:43.350686 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-08-29 15:19:43.350694 | orchestrator | Friday 29 August 2025 15:15:11 +0000 (0:00:01.829) 0:04:45.204 *********
2025-08-29 15:19:43.350706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:19:43.350715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:19:43.350757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.350772 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:19:43.350780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:19:43.350788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:19:43.350795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.350803 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:19:43.350814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:19:43.350840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:19:43.350849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes':
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.350856 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:19:43.350864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:19:43.350871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:19:43.350879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.350890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.350919 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:19:43.350927 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:19:43.350949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:19:43.350962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.350972 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:19:43.350983 | orchestrator |
2025-08-29 15:19:43.350994 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-08-29 15:19:43.351006 | orchestrator | Friday 29 August 2025 15:15:13 +0000 (0:00:02.436) 0:04:47.640 *********
2025-08-29 15:19:43.351017 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:19:43.351027 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:19:43.351038 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:19:43.351048 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:19:43.351059 | orchestrator |
2025-08-29 15:19:43.351068 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-08-29 15:19:43.351078 | orchestrator | Friday 29 August 2025 15:15:14 +0000 (0:00:01.130) 0:04:48.771 *********
2025-08-29 15:19:43.351089 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 15:19:43.351100 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-08-29 15:19:43.351111 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-08-29 15:19:43.351122 | orchestrator |
2025-08-29 15:19:43.351133 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-08-29 15:19:43.351143 | orchestrator | Friday 29 August 2025 15:15:15 +0000 (0:00:01.044) 0:04:49.816 *********
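The external Ceph tasks in this run check for keyring files and then run "Extract nova key from file" / "Extract cinder key from file". The snippet below is only an illustrative sketch of what such an extraction amounts to, assuming the standard ceph keyring layout of `[client.<name>]` sections with a `key = ...` line; the keyring content and key value are made up, and this is not the actual implementation used by the nova-cell role.

```python
# Hypothetical keyring text in the standard ceph INI-style layout.
KEYRING = """
[client.nova]
        key = AQDexamplekey==
"""

def extract_key(keyring_text, client):
    """Return the 'key' value from the given client section of a ceph keyring."""
    in_section = False
    for raw in keyring_text.splitlines():
        line = raw.strip()
        if line.startswith("[") and line.endswith("]"):
            # Track whether we are inside the requested [client.<name>] section.
            in_section = (line == "[%s]" % client)
        elif in_section and line.startswith("key"):
            # Split on the first '=' only; the key value itself may contain '='.
            return line.split("=", 1)[1].strip()
    raise KeyError(client)

print(extract_key(KEYRING, "client.nova"))
```

The key is then handed to later tasks (copying the keyring into the container config directories and defining libvirt secrets), which is why only the value, not the whole file, is needed.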
2025-08-29 15:19:43.351152 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 15:19:43.351164 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-08-29 15:19:43.351174 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-08-29 15:19:43.351185 | orchestrator |
2025-08-29 15:19:43.351196 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-08-29 15:19:43.351205 | orchestrator | Friday 29 August 2025 15:15:17 +0000 (0:00:01.108) 0:04:50.925 *********
2025-08-29 15:19:43.351215 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:19:43.351224 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:19:43.351233 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:19:43.351243 | orchestrator |
2025-08-29 15:19:43.351255 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-08-29 15:19:43.351266 | orchestrator | Friday 29 August 2025 15:15:17 +0000 (0:00:00.535) 0:04:51.460 *********
2025-08-29 15:19:43.351277 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:19:43.351288 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:19:43.351295 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:19:43.351301 | orchestrator |
2025-08-29 15:19:43.351308 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-08-29 15:19:43.351315 | orchestrator | Friday 29 August 2025 15:15:18 +0000 (0:00:01.099) 0:04:52.560 *********
2025-08-29 15:19:43.351321 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-08-29 15:19:43.351348 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-08-29 15:19:43.351364 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-08-29 15:19:43.351371 | orchestrator |
2025-08-29 15:19:43.351378 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-08-29 15:19:43.351384 | orchestrator | Friday 29 August 2025 15:15:19 +0000 (0:00:01.216) 0:04:53.776 *********
2025-08-29 15:19:43.351391 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-08-29 15:19:43.351397 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-08-29 15:19:43.351404 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-08-29 15:19:43.351411 | orchestrator |
2025-08-29 15:19:43.351417 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-08-29 15:19:43.351424 | orchestrator | Friday 29 August 2025 15:15:21 +0000 (0:00:01.280) 0:04:55.057 *********
2025-08-29 15:19:43.351431 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-08-29 15:19:43.351437 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-08-29 15:19:43.351459 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-08-29 15:19:43.351467 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-08-29 15:19:43.351473 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-08-29 15:19:43.351480 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-08-29 15:19:43.351486 | orchestrator |
2025-08-29 15:19:43.351493 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-08-29 15:19:43.351499 | orchestrator | Friday 29 August 2025 15:15:25 +0000 (0:00:04.399) 0:04:59.456 *********
2025-08-29 15:19:43.351506 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:19:43.351512 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:19:43.351519 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:19:43.351525 | orchestrator |
2025-08-29 15:19:43.351532 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-08-29 15:19:43.351539 | orchestrator | Friday 29 August 2025 15:15:26 +0000 (0:00:00.526) 0:04:59.983 *********
2025-08-29 15:19:43.351545 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:19:43.351552 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:19:43.351558 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:19:43.351565 | orchestrator |
2025-08-29 15:19:43.351572 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-08-29 15:19:43.351578 | orchestrator | Friday 29 August 2025 15:15:26 +0000 (0:00:00.349) 0:05:00.332 *********
2025-08-29 15:19:43.351585 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:19:43.351591 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:19:43.351598 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:19:43.351604 | orchestrator |
2025-08-29 15:19:43.351616 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-08-29 15:19:43.351623 | orchestrator | Friday 29 August 2025 15:15:27 +0000 (0:00:01.377) 0:05:01.709 *********
2025-08-29 15:19:43.351631 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-08-29 15:19:43.351648 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-08-29 15:19:43.351655 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-08-29 15:19:43.351662 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-08-29 15:19:43.351669 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-08-29 15:19:43.351676 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-08-29 15:19:43.351696 | orchestrator |
2025-08-29 15:19:43.351703 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-08-29 15:19:43.351709 | orchestrator | Friday 29 August 2025 15:15:31 +0000 (0:00:03.718) 0:05:05.428 *********
2025-08-29 15:19:43.351716 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-08-29 15:19:43.351723 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-08-29 15:19:43.351729 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-08-29 15:19:43.351736 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-08-29 15:19:43.351742 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:19:43.351749 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-08-29 15:19:43.351755 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:19:43.351762 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-08-29 15:19:43.351768 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:19:43.351775 | orchestrator |
2025-08-29 15:19:43.351781 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-08-29 15:19:43.351788 | orchestrator | Friday 29 August 2025 15:15:35 +0000 (0:00:03.935) 0:05:09.363 *********
2025-08-29 15:19:43.351794 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:19:43.351801 | orchestrator |
2025-08-29 15:19:43.351807 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-08-29 15:19:43.351814 | orchestrator | Friday 29 August 2025 15:15:35 +0000 (0:00:00.132) 0:05:09.495 *********
2025-08-29 15:19:43.351821 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:19:43.351827 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:19:43.351833 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:19:43.351840 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:19:43.351846 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:19:43.351853 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:19:43.351859 | orchestrator |
2025-08-29 15:19:43.351866 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-08-29 15:19:43.351872 | orchestrator | Friday 29 August 2025 15:15:36 +0000 (0:00:00.663) 0:05:10.159 *********
2025-08-29 15:19:43.351879 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 15:19:43.351885 | orchestrator |
2025-08-29 15:19:43.351892 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-08-29 15:19:43.351898 | orchestrator | Friday 29 August 2025 15:15:37 +0000 (0:00:00.755) 0:05:10.914 *********
2025-08-29 15:19:43.351905 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:19:43.351911 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:19:43.351918 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:19:43.351924 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:19:43.351931 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:19:43.351937 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:19:43.351944 | orchestrator |
2025-08-29 15:19:43.351950 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-08-29 15:19:43.351957 | orchestrator | Friday 29 August 2025 15:15:37 +0000 (0:00:00.942) 0:05:11.857 *********
2025-08-29 15:19:43.351971 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared',
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:19:43.351984 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:19:43.351996 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:19:43.352004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:19:43.352011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:19:43.352021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:19:43.352028 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:19:43.352045 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:19:43.352052 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:19:43.352059 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.352066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.352073 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 
15:19:43.352083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.352094 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.352105 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.352113 | orchestrator | 2025-08-29 15:19:43.352119 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-08-29 15:19:43.352126 | orchestrator | Friday 29 August 2025 15:15:41 +0000 (0:00:03.825) 0:05:15.683 ********* 2025-08-29 15:19:43.352135 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:19:43.352146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:19:43.352176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:19:43.352208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:19:43.352228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:19:43.352240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:19:43.352253 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.352264 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.352293 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.352388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:19:43.352408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:19:43.352419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:19:43.352427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.352434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.352445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:43.352471 | orchestrator | 2025-08-29 15:19:43.352479 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-08-29 15:19:43.352486 | orchestrator | Friday 29 August 2025 15:15:50 +0000 (0:00:08.346) 0:05:24.029 ********* 2025-08-29 15:19:43.352492 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:19:43.352499 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:19:43.352506 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:19:43.352512 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.352519 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
15:19:43.352525 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.352532 | orchestrator | 2025-08-29 15:19:43.352538 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-08-29 15:19:43.352545 | orchestrator | Friday 29 August 2025 15:15:51 +0000 (0:00:01.442) 0:05:25.472 ********* 2025-08-29 15:19:43.352552 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 15:19:43.352559 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 15:19:43.352565 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 15:19:43.352572 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 15:19:43.352583 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 15:19:43.352590 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 15:19:43.352598 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.352604 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 15:19:43.352611 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.352618 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 15:19:43.352624 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 15:19:43.352631 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.352638 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 15:19:43.352644 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 
15:19:43.352651 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 15:19:43.352657 | orchestrator | 2025-08-29 15:19:43.352664 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-08-29 15:19:43.352671 | orchestrator | Friday 29 August 2025 15:15:56 +0000 (0:00:04.906) 0:05:30.378 ********* 2025-08-29 15:19:43.352677 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:19:43.352684 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:19:43.352690 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:19:43.352697 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.352704 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.352710 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.352717 | orchestrator | 2025-08-29 15:19:43.352723 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-08-29 15:19:43.352730 | orchestrator | Friday 29 August 2025 15:15:57 +0000 (0:00:00.617) 0:05:30.995 ********* 2025-08-29 15:19:43.352737 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 15:19:43.352743 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 15:19:43.352750 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 15:19:43.352762 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 15:19:43.352769 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 15:19:43.352776 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 
'auth.conf', 'service': 'nova-compute'}) 2025-08-29 15:19:43.352783 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 15:19:43.352789 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 15:19:43.352796 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 15:19:43.352803 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 15:19:43.352809 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.352816 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 15:19:43.352822 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.352829 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 15:19:43.352835 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.352846 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:19:43.352853 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:19:43.352859 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:19:43.352866 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:19:43.352872 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:19:43.352878 | orchestrator | changed: 
[testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:19:43.352884 | orchestrator | 2025-08-29 15:19:43.352891 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-08-29 15:19:43.352897 | orchestrator | Friday 29 August 2025 15:16:02 +0000 (0:00:05.613) 0:05:36.609 ********* 2025-08-29 15:19:43.352903 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 15:19:43.352909 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 15:19:43.352919 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 15:19:43.352925 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 15:19:43.352931 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 15:19:43.352938 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 15:19:43.352944 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 15:19:43.352950 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 15:19:43.352956 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 15:19:43.352962 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 15:19:43.352969 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 15:19:43.352980 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 15:19:43.352986 | orchestrator | skipping: [testbed-node-1] => (item={'src': 
'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 15:19:43.352992 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.352999 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 15:19:43.353005 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 15:19:43.353011 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.353017 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 15:19:43.353024 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 15:19:43.353030 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.353036 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 15:19:43.353042 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 15:19:43.353048 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 15:19:43.353055 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 15:19:43.353061 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 15:19:43.353067 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 15:19:43.353073 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 15:19:43.353079 | orchestrator | 2025-08-29 15:19:43.353085 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-08-29 15:19:43.353092 | orchestrator | Friday 29 August 2025 15:16:10 +0000 (0:00:07.367) 0:05:43.976 ********* 2025-08-29 15:19:43.353098 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:19:43.353104 | 
orchestrator | skipping: [testbed-node-4] 2025-08-29 15:19:43.353110 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:19:43.353116 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.353122 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.353128 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.353135 | orchestrator | 2025-08-29 15:19:43.353141 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-08-29 15:19:43.353147 | orchestrator | Friday 29 August 2025 15:16:10 +0000 (0:00:00.861) 0:05:44.838 ********* 2025-08-29 15:19:43.353153 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:19:43.353159 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:19:43.353165 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:19:43.353171 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.353177 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.353184 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.353190 | orchestrator | 2025-08-29 15:19:43.353196 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-08-29 15:19:43.353203 | orchestrator | Friday 29 August 2025 15:16:11 +0000 (0:00:00.666) 0:05:45.504 ********* 2025-08-29 15:19:43.353215 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.353221 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:19:43.353227 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.353233 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.353239 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:19:43.353246 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:19:43.353252 | orchestrator | 2025-08-29 15:19:43.353258 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-08-29 15:19:43.353264 | orchestrator | Friday 29 August 2025 15:16:13 +0000 
(0:00:02.075) 0:05:47.580 *********
2025-08-29 15:19:43.353349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:19:43.353365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:19:43.353372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.353379 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:19:43.353385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:19:43.353396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:19:43.353425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.353437 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:19:43.353449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:19:43.353456 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:19:43.353462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:19:43.353469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.353475 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:19:43.353486 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.353498 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:19:43.353504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:19:43.353515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.353522 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:19:43.353529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:19:43.353535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.353541 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:19:43.353548 | orchestrator |
2025-08-29 15:19:43.353555 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-08-29 15:19:43.353561 | orchestrator | Friday 29 August 2025 15:16:15 +0000 (0:00:01.878) 0:05:49.458 *********
2025-08-29 15:19:43.353567 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-08-29 15:19:43.353574 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-08-29 15:19:43.353580 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:19:43.353586 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-08-29 15:19:43.353592 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-08-29 15:19:43.353599 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:19:43.353605 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-08-29 15:19:43.353612 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-08-29 15:19:43.353618 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:19:43.353624 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-08-29 15:19:43.353630 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-08-29 15:19:43.353636 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:19:43.353642 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-08-29 15:19:43.353653 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-08-29 15:19:43.353659 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:19:43.353665 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-08-29 15:19:43.353671 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-08-29 15:19:43.353678 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:19:43.353684 | orchestrator |
2025-08-29 15:19:43.353690 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-08-29 15:19:43.353696 | orchestrator | Friday 29 August 2025 15:16:16 +0000 (0:00:00.910) 0:05:50.369 *********
2025-08-29 15:19:43.353706 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:19:43.353716 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:19:43.353723 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:19:43.353730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:19:43.353737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:19:43.353761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:19:43.353768 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:19:43.353779 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:19:43.353786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:19:43.353793 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.353799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.353810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.353827 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.353838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.353845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:19:43.353852 | orchestrator |
2025-08-29 15:19:43.353858 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-08-29 15:19:43.353864 | orchestrator | Friday 29 August 2025 15:16:19 +0000 (0:00:02.808) 0:05:53.178 *********
2025-08-29 15:19:43.353871 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:19:43.353877 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:19:43.353883 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:19:43.353889 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:19:43.353895 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:19:43.353901 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:19:43.353907 | orchestrator |
2025-08-29 15:19:43.353914 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 15:19:43.353920 | orchestrator | Friday 29 August 2025 15:16:20 +0000 (0:00:00.825) 0:05:54.004 *********
2025-08-29 15:19:43.353930 | orchestrator |
2025-08-29 15:19:43.353937 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 15:19:43.353943 | orchestrator | Friday 29 August 2025 15:16:20 +0000 (0:00:00.153) 0:05:54.157 *********
2025-08-29 15:19:43.353949 | orchestrator |
2025-08-29 15:19:43.353956 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 15:19:43.353962 | orchestrator | Friday 29 August 2025 15:16:20 +0000 (0:00:00.150) 0:05:54.308 *********
2025-08-29 15:19:43.353968 | orchestrator |
2025-08-29 15:19:43.353974 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 15:19:43.353980 | orchestrator | Friday 29 August 2025 15:16:20 +0000 (0:00:00.142) 0:05:54.450 *********
2025-08-29 15:19:43.353986 | orchestrator |
2025-08-29 15:19:43.353992 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 15:19:43.353998 | orchestrator | Friday 29 August 2025 15:16:20 +0000 (0:00:00.131) 0:05:54.581 *********
2025-08-29 15:19:43.354004 | orchestrator |
2025-08-29 15:19:43.354010 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 15:19:43.354038 | orchestrator | Friday 29 August 2025 15:16:20 +0000 (0:00:00.139) 0:05:54.721 *********
2025-08-29 15:19:43.354046 | orchestrator |
2025-08-29 15:19:43.354053 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-08-29 15:19:43.354059 | orchestrator | Friday 29 August 2025 15:16:21 +0000 (0:00:00.342) 0:05:55.064 *********
2025-08-29 15:19:43.354065 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:19:43.354071 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:19:43.354078 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:19:43.354084 | orchestrator |
2025-08-29 15:19:43.354090 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-08-29 15:19:43.354096 | orchestrator | Friday 29 August 2025 15:16:31 +0000 (0:00:10.386) 0:06:05.450 *********
2025-08-29 15:19:43.354102 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:19:43.354108 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:19:43.354114 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:19:43.354121 | orchestrator |
2025-08-29 15:19:43.354127 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-08-29 15:19:43.354137 | orchestrator | Friday 29 August 2025 15:16:50 +0000 (0:00:19.383) 0:06:24.834 *********
2025-08-29 15:19:43.354143 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:19:43.354149 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:19:43.354155 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:19:43.354161 | orchestrator |
2025-08-29 15:19:43.354167 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-08-29 15:19:43.354174 | orchestrator | Friday 29 August 2025 15:17:16 +0000 (0:00:25.937) 0:06:50.772 *********
2025-08-29 15:19:43.354180 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:19:43.354186 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:19:43.354192 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:19:43.354198 | orchestrator |
2025-08-29 15:19:43.354204 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-08-29 15:19:43.354210 | orchestrator | Friday 29 August 2025 15:17:56 +0000 (0:00:39.290) 0:07:30.063 *********
2025-08-29 15:19:43.354216 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left).
2025-08-29 15:19:43.354223 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:19:43.354229 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left).
2025-08-29 15:19:43.354235 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:19:43.354241 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:19:43.354247 | orchestrator |
2025-08-29 15:19:43.354253 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-08-29 15:19:43.354259 | orchestrator | Friday 29 August 2025 15:18:02 +0000 (0:00:06.473) 0:07:36.537 *********
2025-08-29 15:19:43.354269 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:19:43.354281 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:19:43.354287 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:19:43.354293 | orchestrator |
2025-08-29 15:19:43.354300 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-08-29 15:19:43.354306 | orchestrator | Friday 29 August 2025 15:18:03 +0000 (0:00:00.824) 0:07:37.361 *********
2025-08-29 15:19:43.354312 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:19:43.354322 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:19:43.354348 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:19:43.354358 | orchestrator |
2025-08-29 15:19:43.354368 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-08-29 15:19:43.354378 | orchestrator | Friday 29 August 2025 15:18:32 +0000 (0:00:29.093) 0:08:06.454 *********
2025-08-29 15:19:43.354388 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:19:43.354396 | orchestrator |
2025-08-29 15:19:43.354405 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-08-29 15:19:43.354414 | orchestrator | Friday 29 August 2025 15:18:32 +0000 (0:00:00.125) 0:08:06.580 *********
2025-08-29 15:19:43.354425 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:19:43.354435 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:19:43.354445 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:19:43.354454 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:19:43.354464 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:19:43.354474 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-08-29 15:19:43.354485 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-08-29 15:19:43.354492 | orchestrator |
2025-08-29 15:19:43.354498 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-08-29 15:19:43.354504 | orchestrator | Friday 29 August 2025 15:18:55 +0000 (0:00:22.564) 0:08:29.145 *********
2025-08-29 15:19:43.354510 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:19:43.354516 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:19:43.354522 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:19:43.354528 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:19:43.354534 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:19:43.354540 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:19:43.354546 | orchestrator |
2025-08-29 15:19:43.354553 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-08-29 15:19:43.354559 | orchestrator | Friday 29 August 2025 15:19:04 +0000 (0:00:09.068) 0:08:38.213 *********
2025-08-29 15:19:43.354565 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:19:43.354571 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:19:43.354577 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:19:43.354583 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:19:43.354589 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:19:43.354595 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2025-08-29 15:19:43.354601 | orchestrator |
2025-08-29 15:19:43.354607 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-08-29 15:19:43.354613 | orchestrator | Friday 29 August 2025 15:19:08 +0000 (0:00:04.165) 0:08:42.379 *********
2025-08-29 15:19:43.354619 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-08-29 15:19:43.354625 | orchestrator |
2025-08-29 15:19:43.354631 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-08-29 15:19:43.354637 | orchestrator | Friday 29 August 2025 15:19:20 +0000 (0:00:11.635) 0:08:54.014 *********
2025-08-29 15:19:43.354644 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-08-29 15:19:43.354650 | orchestrator |
2025-08-29 15:19:43.354656 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-08-29 15:19:43.354662 | orchestrator | Friday 29 August 2025 15:19:21 +0000 (0:00:01.338) 0:08:55.353 *********
2025-08-29 15:19:43.354668 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:19:43.354697 | orchestrator |
2025-08-29 15:19:43.354704 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-08-29 15:19:43.354710 | orchestrator | Friday 29 August 2025 15:19:22 +0000 (0:00:01.234) 0:08:56.587 *********
2025-08-29 15:19:43.354716 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-08-29 15:19:43.354723 | orchestrator |
2025-08-29 15:19:43.354729 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-08-29 15:19:43.354735 | orchestrator | Friday 29 August 2025 15:19:33 +0000 (0:00:10.600) 0:09:07.187 *********
2025-08-29 15:19:43.354754 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:19:43.354761 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:19:43.354767 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:19:43.354773 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:19:43.354779 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:19:43.354786 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:19:43.354792 | orchestrator |
2025-08-29 15:19:43.354798 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-08-29 15:19:43.354804 | orchestrator |
2025-08-29 15:19:43.354810 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-08-29 15:19:43.354817 | orchestrator | Friday 29 August 2025 15:19:35 +0000 (0:00:01.793) 0:09:08.981 *********
2025-08-29 15:19:43.354823 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:19:43.354829 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:19:43.354835 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:19:43.354841 | orchestrator |
2025-08-29 15:19:43.354847 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-08-29 15:19:43.354853 | orchestrator |
2025-08-29 15:19:43.354859 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-08-29 15:19:43.354866 | orchestrator | Friday 29 August 2025 15:19:36 +0000 (0:00:01.285) 0:09:10.266 *********
2025-08-29 15:19:43.354872 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:19:43.354878 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:19:43.354884 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:19:43.354890 | orchestrator |
2025-08-29 15:19:43.354896 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-08-29 15:19:43.354902 | orchestrator |
2025-08-29 15:19:43.354914 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-08-29 15:19:43.354921 | orchestrator | Friday 29 August 2025 15:19:36 +0000 (0:00:00.524) 0:09:10.790 *********
2025-08-29 15:19:43.354927 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-08-29 15:19:43.354933 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-08-29 15:19:43.354939 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-08-29 15:19:43.354946 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-08-29 15:19:43.354952 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-08-29 15:19:43.354958 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-08-29 15:19:43.354964 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:19:43.354971 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-08-29 15:19:43.354977 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-08-29 15:19:43.354985 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-08-29 15:19:43.354995 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-08-29 15:19:43.355005 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-08-29 15:19:43.355015 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-08-29 15:19:43.355024 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:19:43.355033 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-08-29 15:19:43.355044 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-08-29 15:19:43.355052 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-08-29 15:19:43.355068 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-08-29 15:19:43.355078 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-08-29 15:19:43.355090 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-08-29 15:19:43.355100 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:19:43.355110 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-08-29 15:19:43.355121 |
orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-08-29 15:19:43.355127 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-08-29 15:19:43.355133 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-08-29 15:19:43.355139 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-08-29 15:19:43.355145 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-08-29 15:19:43.355152 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.355158 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-08-29 15:19:43.355164 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-08-29 15:19:43.355170 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-08-29 15:19:43.355177 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-08-29 15:19:43.355183 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-08-29 15:19:43.355189 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-08-29 15:19:43.355196 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.355202 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-08-29 15:19:43.355208 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-08-29 15:19:43.355214 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-08-29 15:19:43.355220 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-08-29 15:19:43.355226 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-08-29 15:19:43.355232 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-08-29 15:19:43.355238 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.355244 | orchestrator | 2025-08-29 15:19:43.355253 | orchestrator | PLAY [Reload global Nova API services] 
***************************************** 2025-08-29 15:19:43.355264 | orchestrator | 2025-08-29 15:19:43.355273 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-08-29 15:19:43.355283 | orchestrator | Friday 29 August 2025 15:19:38 +0000 (0:00:01.354) 0:09:12.145 ********* 2025-08-29 15:19:43.355293 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-08-29 15:19:43.355309 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-08-29 15:19:43.355320 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.355343 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-08-29 15:19:43.355355 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-08-29 15:19:43.355362 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:43.355368 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-08-29 15:19:43.355374 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-08-29 15:19:43.355380 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:43.355387 | orchestrator | 2025-08-29 15:19:43.355393 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-08-29 15:19:43.355399 | orchestrator | 2025-08-29 15:19:43.355405 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-08-29 15:19:43.355411 | orchestrator | Friday 29 August 2025 15:19:39 +0000 (0:00:00.746) 0:09:12.891 ********* 2025-08-29 15:19:43.355418 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:43.355424 | orchestrator | 2025-08-29 15:19:43.355430 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-08-29 15:19:43.355436 | orchestrator | 2025-08-29 15:19:43.355442 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-08-29 15:19:43.355471 | 
orchestrator | Friday 29 August 2025 15:19:39 +0000 (0:00:00.664) 0:09:13.556 *********
2025-08-29 15:19:43.355482 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:19:43.355492 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:19:43.355502 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:19:43.355511 | orchestrator |
2025-08-29 15:19:43.355529 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:19:43.355540 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:19:43.355552 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-08-29 15:19:43.355564 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-08-29 15:19:43.355574 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-08-29 15:19:43.355584 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-08-29 15:19:43.355592 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-08-29 15:19:43.355598 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-08-29 15:19:43.355604 | orchestrator |
2025-08-29 15:19:43.355611 | orchestrator |
2025-08-29 15:19:43.355617 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:19:43.355623 | orchestrator | Friday 29 August 2025 15:19:40 +0000 (0:00:00.450) 0:09:14.006 *********
2025-08-29 15:19:43.355630 | orchestrator | ===============================================================================
2025-08-29 15:19:43.355636 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 39.29s
2025-08-29 15:19:43.355642 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.90s
2025-08-29 15:19:43.355648 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 29.09s
2025-08-29 15:19:43.355657 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 25.94s
2025-08-29 15:19:43.355667 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.56s
2025-08-29 15:19:43.355677 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.26s
2025-08-29 15:19:43.355687 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.99s
2025-08-29 15:19:43.355697 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.38s
2025-08-29 15:19:43.355707 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.79s
2025-08-29 15:19:43.355717 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.74s
2025-08-29 15:19:43.355728 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.68s
2025-08-29 15:19:43.355737 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.10s
2025-08-29 15:19:43.355743 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.06s
2025-08-29 15:19:43.355752 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.64s
2025-08-29 15:19:43.355762 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.60s
2025-08-29 15:19:43.355772 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 10.39s
2025-08-29 15:19:43.355782 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.07s
2025-08-29 15:19:43.355817 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.91s
2025-08-29 15:19:43.355828 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.68s
2025-08-29 15:19:43.355838 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 8.35s
2025-08-29 15:19:43.355854 | orchestrator | 2025-08-29 15:19:43 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:19:46.382427 | orchestrator | 2025-08-29 15:19:46 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:19:49.433154 | orchestrator | 2025-08-29 15:19:49 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:19:52.469692 | orchestrator | 2025-08-29 15:19:52 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:19:55.509489 | orchestrator | 2025-08-29 15:19:55 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:19:58.549341 | orchestrator | 2025-08-29 15:19:58 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:20:01.592890 | orchestrator | 2025-08-29 15:20:01 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:20:04.637870 | orchestrator | 2025-08-29 15:20:04 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:20:07.679762 | orchestrator | 2025-08-29 15:20:07 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:20:10.729089 | orchestrator | 2025-08-29 15:20:10 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:20:13.772306 | orchestrator | 2025-08-29 15:20:13 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:20:16.814197 | orchestrator | 2025-08-29 15:20:16 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:20:19.850800 | orchestrator | 2025-08-29 15:20:19 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:20:22.896984 | orchestrator | 2025-08-29 15:20:22 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:20:25.942927 | orchestrator | 2025-08-29 15:20:25 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:20:28.981859 | orchestrator | 2025-08-29 15:20:28 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:20:32.030081 | orchestrator | 2025-08-29 15:20:32 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:20:35.071940 | orchestrator | 2025-08-29 15:20:35 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:20:38.111092 | orchestrator | 2025-08-29 15:20:38 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:20:41.155232 | orchestrator | 2025-08-29 15:20:41 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:20:44.197028 | orchestrator |
2025-08-29 15:20:44.584402 | orchestrator |
2025-08-29 15:20:44.588929 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Aug 29 15:20:44 UTC 2025
2025-08-29 15:20:44.591325 | orchestrator |
2025-08-29 15:20:44.964215 | orchestrator | ok: Runtime: 0:38:57.576970
2025-08-29 15:20:45.239207 |
2025-08-29 15:20:45.239406 | TASK [Bootstrap services]
2025-08-29 15:20:46.086197 | orchestrator |
2025-08-29 15:20:46.086352 | orchestrator | # BOOTSTRAP
2025-08-29 15:20:46.086364 | orchestrator |
2025-08-29 15:20:46.086373 | orchestrator | + set -e
2025-08-29 15:20:46.086381 | orchestrator | + echo
2025-08-29 15:20:46.086389 | orchestrator | + echo '# BOOTSTRAP'
2025-08-29 15:20:46.086400 | orchestrator | + echo
2025-08-29 15:20:46.086432 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2025-08-29 15:20:46.092868 | orchestrator | + set -e
2025-08-29 15:20:46.092940 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2025-08-29 15:20:50.979799 | orchestrator | 2025-08-29 15:20:50 | INFO  | It takes a moment until task 8253e1e0-3664-49f5-8ce1-16104e63dce7 (flavor-manager) has been
started and output is visible here. 2025-08-29 15:20:55.245732 | orchestrator | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ 2025-08-29 15:20:55.245808 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:179 │ 2025-08-29 15:20:55.245819 | orchestrator | │ in run │ 2025-08-29 15:20:55.245824 | orchestrator | │ │ 2025-08-29 15:20:55.245827 | orchestrator | │ 176 │ logger.add(sys.stderr, format=log_fmt, level=level, colorize=True) │ 2025-08-29 15:20:55.245841 | orchestrator | │ 177 │ │ 2025-08-29 15:20:55.245845 | orchestrator | │ 178 │ definitions = get_flavor_definitions(name, url) │ 2025-08-29 15:20:55.245850 | orchestrator | │ ❱ 179 │ manager = FlavorManager( │ 2025-08-29 15:20:55.245854 | orchestrator | │ 180 │ │ cloud=Cloud(cloud), definitions=definitions, recommended=recom │ 2025-08-29 15:20:55.245858 | orchestrator | │ 181 │ ) │ 2025-08-29 15:20:55.245862 | orchestrator | │ 182 │ manager.run() │ 2025-08-29 15:20:55.245866 | orchestrator | │ │ 2025-08-29 15:20:55.245870 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-08-29 15:20:55.245881 | orchestrator | │ │ cloud = 'admin' │ │ 2025-08-29 15:20:55.245885 | orchestrator | │ │ debug = False │ │ 2025-08-29 15:20:55.245889 | orchestrator | │ │ definitions = { │ │ 2025-08-29 15:20:55.245893 | orchestrator | │ │ │ 'reference': [ │ │ 2025-08-29 15:20:55.245897 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-08-29 15:20:55.245901 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-08-29 15:20:55.245905 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-08-29 15:20:55.245909 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-08-29 15:20:55.245913 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-08-29 15:20:55.245917 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-08-29 15:20:55.245920 | orchestrator | │ │ 
│ ], │ │ 2025-08-29 15:20:55.245924 | orchestrator | │ │ │ 'mandatory': [ │ │ 2025-08-29 15:20:55.245928 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:55.245932 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-08-29 15:20:55.245954 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:55.245958 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-08-29 15:20:55.245962 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 15:20:55.245966 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-08-29 15:20:55.245970 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:55.245973 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-08-29 15:20:55.245977 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-08-29 15:20:55.245981 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:55.245985 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:55.245988 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:55.245992 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1-5', │ │ 2025-08-29 15:20:55.245996 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:55.245999 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-08-29 15:20:55.246003 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-08-29 15:20:55.246007 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-08-29 15:20:55.246046 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:55.246052 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:5', │ │ 2025-08-29 15:20:55.246056 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-5', │ │ 2025-08-29 15:20:55.246060 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:55.246064 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:55.246067 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:55.246071 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2', │ │ 2025-08-29 15:20:55.246078 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:55.246082 | orchestrator | │ 
│ │ │ │ 'ram': 2048, │ │ 2025-08-29 15:20:55.246086 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 15:20:55.246089 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:55.246093 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:55.246097 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2', │ │ 2025-08-29 15:20:55.246101 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2', │ │ 2025-08-29 15:20:55.246104 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:55.246108 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:55.246112 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:55.246116 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2-5', │ │ 2025-08-29 15:20:55.246119 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:55.246127 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-08-29 15:20:55.246131 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-08-29 15:20:55.246134 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:55.246138 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:55.246142 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2:5', │ │ 2025-08-29 15:20:55.246146 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2-5', │ │ 2025-08-29 15:20:55.246149 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:55.246153 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:55.246157 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:55.246160 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4', │ │ 2025-08-29 15:20:55.246164 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:55.246168 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 15:20:55.246172 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 15:20:55.246175 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:55.246179 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:55.246183 | orchestrator | │ │ │ │ │ 
'scs:name-v1': 'SCS-1V:4', │ │ 2025-08-29 15:20:55.246187 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4', │ │ 2025-08-29 15:20:55.246190 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:55.246194 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:55.246198 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:55.246202 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4-10', │ │ 2025-08-29 15:20:55.246206 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:55.246227 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 15:20:55.246236 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-08-29 15:20:55.246249 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:55.279566 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:55.279636 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4:10', │ │ 2025-08-29 15:20:55.279641 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4-10', │ │ 2025-08-29 15:20:55.279646 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:55.279650 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:55.279654 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:55.279658 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8', │ │ 2025-08-29 15:20:55.279662 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:55.279684 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-08-29 15:20:55.279690 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 15:20:55.279694 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:55.279699 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:55.279702 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8', │ │ 2025-08-29 15:20:55.279706 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8', │ │ 2025-08-29 15:20:55.279710 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:55.279714 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:55.279718 | orchestrator | │ │ │ │ { │ │ 
2025-08-29 15:20:55.279721 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8-20', │ │ 2025-08-29 15:20:55.279725 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:55.279729 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-08-29 15:20:55.279733 | orchestrator | │ │ │ │ │ 'disk': 20, │ │ 2025-08-29 15:20:55.279736 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:55.279740 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:55.279744 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8:20', │ │ 2025-08-29 15:20:55.279748 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8-20', │ │ 2025-08-29 15:20:55.279752 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:55.279756 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:55.279759 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:55.279763 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4', │ │ 2025-08-29 15:20:55.279767 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-08-29 15:20:55.279773 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 15:20:55.279777 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 15:20:55.279780 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:55.279784 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:55.279788 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4', │ │ 2025-08-29 15:20:55.279792 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4', │ │ 2025-08-29 15:20:55.279796 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:55.279808 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:55.279812 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:55.279815 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4-10', │ │ 2025-08-29 15:20:55.279819 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-08-29 15:20:55.279826 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 15:20:55.279830 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-08-29 15:20:55.279845 | 
orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:55.279849 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:55.279853 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4:10', │ │ 2025-08-29 15:20:55.279857 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4-10', │ │ 2025-08-29 15:20:55.279861 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:55.279864 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:55.279868 | orchestrator | │ │ │ │ ... +19 │ │ 2025-08-29 15:20:55.279872 | orchestrator | │ │ │ ] │ │ 2025-08-29 15:20:55.279876 | orchestrator | │ │ } │ │ 2025-08-29 15:20:55.279880 | orchestrator | │ │ level = 'INFO' │ │ 2025-08-29 15:20:55.279883 | orchestrator | │ │ log_fmt = '{time:YYYY-MM-DD HH:mm:ss} | │ │ 2025-08-29 15:20:55.279887 | orchestrator | │ │ {level: <8} | '+17 │ │ 2025-08-29 15:20:55.279891 | orchestrator | │ │ name = 'local' │ │ 2025-08-29 15:20:55.279894 | orchestrator | │ │ recommended = True │ │ 2025-08-29 15:20:55.279898 | orchestrator | │ │ url = None │ │ 2025-08-29 15:20:55.279902 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-08-29 15:20:55.279908 | orchestrator | │ │ 2025-08-29 15:20:55.279912 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:97 │ 2025-08-29 15:20:55.279916 | orchestrator | │ in __init__ │ 2025-08-29 15:20:55.279919 | orchestrator | │ │ 2025-08-29 15:20:55.279923 | orchestrator | │ 94 │ │ self.required_flavors = definitions["mandatory"] │ 2025-08-29 15:20:55.279927 | orchestrator | │ 95 │ │ self.cloud = cloud │ 2025-08-29 15:20:55.279930 | orchestrator | │ 96 │ │ if recommended: │ 2025-08-29 15:20:55.279934 | orchestrator | │ ❱ 97 │ │ │ self.required_flavors = self.required_flavors + definition │ 2025-08-29 15:20:55.279938 | orchestrator | │ 98 │ │ │ 2025-08-29 15:20:55.279941 | orchestrator | │ 99 │ │ self.defaults_dict = {} │ 2025-08-29 
15:20:55.279945 | orchestrator | │ 100 │ │ for item in definitions["reference"]: │ 2025-08-29 15:20:55.279949 | orchestrator | │ │ 2025-08-29 15:20:55.279956 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-08-29 15:20:55.279962 | orchestrator | │ │ cloud = │ │ 2025-08-29 15:20:55.279972 | orchestrator | │ │ definitions = { │ │ 2025-08-29 15:20:55.279976 | orchestrator | │ │ │ 'reference': [ │ │ 2025-08-29 15:20:55.279980 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-08-29 15:20:55.279983 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-08-29 15:20:55.279987 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-08-29 15:20:55.279991 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-08-29 15:20:55.279995 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-08-29 15:20:55.279999 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-08-29 15:20:55.280002 | orchestrator | │ │ │ ], │ │ 2025-08-29 15:20:55.280006 | orchestrator | │ │ │ 'mandatory': [ │ │ 2025-08-29 15:20:55.280010 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:55.280016 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-08-29 15:20:55.312785 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:55.312862 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-08-29 15:20:55.312871 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 15:20:55.312878 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-08-29 15:20:55.312884 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:55.312890 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-08-29 15:20:55.312900 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-08-29 15:20:55.312907 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:55.312911 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:55.312916 | orchestrator | │ │ │ │ { │ │ 2025-08-29 
15:20:55.312920 | orchestrator |   {'name': 'SCS-1L-1-5', 'cpus': 1, 'ram': 1024, 'disk': 5, 'scs:cpu-type': 'crowded-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1L:5', 'scs:name-v2': 'SCS-1L-5', 'hw_rng:allowed': 'true'},
2025-08-29 15:20:55.312982 | orchestrator |   {'name': 'SCS-1V-2', 'cpus': 1, 'ram': 2048, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:2', 'scs:name-v2': 'SCS-1V-2', 'hw_rng:allowed': 'true'},
2025-08-29 15:20:55.313035 | orchestrator |   {'name': 'SCS-1V-2-5', 'cpus': 1, 'ram': 2048, 'disk': 5, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:2:5', 'scs:name-v2': 'SCS-1V-2-5', 'hw_rng:allowed': 'true'},
2025-08-29 15:20:55.313088 | orchestrator |   {'name': 'SCS-1V-4', 'cpus': 1, 'ram': 4096, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:4', 'scs:name-v2': 'SCS-1V-4', 'hw_rng:allowed': 'true'},
2025-08-29 15:20:55.313143 | orchestrator |   {'name': 'SCS-1V-4-10', 'cpus': 1, 'ram': 4096, 'disk': 10, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:4:10', 'scs:name-v2': 'SCS-1V-4-10', 'hw_rng:allowed': 'true'},
2025-08-29 15:20:55.313213 | orchestrator |   {'name': 'SCS-1V-8', 'cpus': 1, 'ram': 8192, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:8', 'scs:name-v2': 'SCS-1V-8', 'hw_rng:allowed': 'true'},
2025-08-29 15:20:55.313270 | orchestrator |   {'name': 'SCS-1V-8-20', 'cpus': 1, 'ram': 8192, 'disk': 20, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:8:20', 'scs:name-v2': 'SCS-1V-8-20', 'hw_rng:allowed': 'true'},
2025-08-29 15:20:55.367968 | orchestrator |   {'name': 'SCS-2V-4', 'cpus': 2, 'ram': 4096, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-2V:4', 'scs:name-v2': 'SCS-2V-4', 'hw_rng:allowed': 'true'},
2025-08-29 15:20:55.368080 | orchestrator |   {'name': 'SCS-2V-4-10', 'cpus': 2, 'ram': 4096, 'disk': 10, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-2V:4:10', 'scs:name-v2': 'SCS-2V-4-10', 'hw_rng:allowed': 'true'},
2025-08-29 15:20:55.368117 | orchestrator |   ... +19
2025-08-29 15:20:55.368121 | orchestrator |   ]
2025-08-29 15:20:55.368125 | orchestrator | }
2025-08-29 15:20:55.368129 | orchestrator | recommended = True
2025-08-29 15:20:55.368133 | orchestrator | self =
2025-08-29 15:20:55.368153 | orchestrator | KeyError: 'recommended'
2025-08-29 15:20:55.866559 | orchestrator | ERROR
2025-08-29 15:20:55.866915 | orchestrator | {
2025-08-29 15:20:55.866988 | orchestrator |   "delta": "0:00:10.049274",
2025-08-29 15:20:55.867031 | orchestrator |   "end": "2025-08-29 15:20:55.718857",
2025-08-29 15:20:55.867068 | orchestrator |   "msg": "non-zero return code",
2025-08-29 15:20:55.867103 | orchestrator |   "rc": 1,
2025-08-29 15:20:55.867136 | orchestrator |   "start": "2025-08-29 15:20:45.669583"
2025-08-29 15:20:55.867169 | orchestrator | } failure
2025-08-29 15:20:55.893657 |
2025-08-29 15:20:55.893866 | PLAY RECAP
2025-08-29 15:20:55.893976 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-08-29 15:20:55.894028 |
2025-08-29 15:20:56.141497 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-08-29 15:20:56.143949 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-08-29 15:20:56.957806 |
2025-08-29 15:20:56.957987 | PLAY [Post output play]
2025-08-29 15:20:56.974350 |
2025-08-29 15:20:56.974518 | LOOP [stage-output : Register sources]
2025-08-29 15:20:57.045576 |
2025-08-29 15:20:57.045940 | TASK [stage-output : Check sudo]
2025-08-29 15:20:57.929128 | orchestrator | sudo: a password is required
2025-08-29 15:20:58.087924 | orchestrator | ok: Runtime: 0:00:00.015989
2025-08-29 15:20:58.095147 |
2025-08-29 15:20:58.095260 | LOOP
[stage-output : Set source and destination for files and folders]
2025-08-29 15:20:58.127164 |
2025-08-29 15:20:58.127351 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-08-29 15:20:58.193096 | orchestrator | ok
2025-08-29 15:20:58.201329 |
2025-08-29 15:20:58.201502 | LOOP [stage-output : Ensure target folders exist]
2025-08-29 15:20:58.675278 | orchestrator | ok: "docs"
2025-08-29 15:20:58.675664 |
2025-08-29 15:20:58.927562 | orchestrator | ok: "artifacts"
2025-08-29 15:20:59.176610 | orchestrator | ok: "logs"
2025-08-29 15:20:59.195214 |
2025-08-29 15:20:59.195386 | LOOP [stage-output : Copy files and folders to staging folder]
2025-08-29 15:20:59.235843 |
2025-08-29 15:20:59.236168 | TASK [stage-output : Make all log files readable]
2025-08-29 15:20:59.521640 | orchestrator | ok
2025-08-29 15:20:59.530976 |
2025-08-29 15:20:59.531111 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-08-29 15:20:59.565657 | orchestrator | skipping: Conditional result was False
2025-08-29 15:20:59.581651 |
2025-08-29 15:20:59.581787 | TASK [stage-output : Discover log files for compression]
2025-08-29 15:20:59.606670 | orchestrator | skipping: Conditional result was False
2025-08-29 15:20:59.618674 |
2025-08-29 15:20:59.618814 | LOOP [stage-output : Archive everything from logs]
2025-08-29 15:20:59.664451 |
2025-08-29 15:20:59.664627 | PLAY [Post cleanup play]
2025-08-29 15:20:59.673071 |
2025-08-29 15:20:59.673175 | TASK [Set cloud fact (Zuul deployment)]
2025-08-29 15:20:59.741111 | orchestrator | ok
2025-08-29 15:20:59.753312 |
2025-08-29 15:20:59.753449 | TASK [Set cloud fact (local deployment)]
2025-08-29 15:20:59.798696 | orchestrator | skipping: Conditional result was False
2025-08-29 15:20:59.814928 |
2025-08-29 15:20:59.815073 | TASK [Clean the cloud environment]
2025-08-29 15:21:03.324853 | orchestrator | 2025-08-29 15:21:03 - clean up servers
2025-08-29 15:21:04.246276 | orchestrator | 2025-08-29 15:21:04 - testbed-manager
2025-08-29 15:21:04.339715 | orchestrator | 2025-08-29 15:21:04 - testbed-node-3
2025-08-29 15:21:04.462159 | orchestrator | 2025-08-29 15:21:04 - testbed-node-0
2025-08-29 15:21:04.553214 | orchestrator | 2025-08-29 15:21:04 - testbed-node-4
2025-08-29 15:21:04.642823 | orchestrator | 2025-08-29 15:21:04 - testbed-node-5
2025-08-29 15:21:04.740066 | orchestrator | 2025-08-29 15:21:04 - testbed-node-1
2025-08-29 15:21:04.851320 | orchestrator | 2025-08-29 15:21:04 - testbed-node-2
2025-08-29 15:21:04.961438 | orchestrator | 2025-08-29 15:21:04 - clean up keypairs
2025-08-29 15:21:04.980881 | orchestrator | 2025-08-29 15:21:04 - testbed
2025-08-29 15:21:05.008371 | orchestrator | 2025-08-29 15:21:05 - wait for servers to be gone
2025-08-29 15:21:15.873184 | orchestrator | 2025-08-29 15:21:15 - clean up ports
2025-08-29 15:21:16.079733 | orchestrator | 2025-08-29 15:21:16 - 0b521db5-cace-41f1-8115-6fc0bfe81c3a
2025-08-29 15:21:16.331153 | orchestrator | 2025-08-29 15:21:16 - 3419a9f7-c7ba-45d2-84e8-b0ddfd7cecd8
2025-08-29 15:21:16.569389 | orchestrator | 2025-08-29 15:21:16 - 7aefe871-28d4-4b1c-b6b7-51a572a1f5dc
2025-08-29 15:21:16.771000 | orchestrator | 2025-08-29 15:21:16 - 816dac77-4d1f-4f5d-a1f7-96e54de668df
2025-08-29 15:21:17.163290 | orchestrator | 2025-08-29 15:21:17 - 82936777-00bc-48e0-8b3b-3b850b053147
2025-08-29 15:21:17.475266 | orchestrator | 2025-08-29 15:21:17 - d3a7c8ac-9b74-41e3-97b0-4592185ab5f0
2025-08-29 15:21:17.681918 | orchestrator | 2025-08-29 15:21:17 - d8918caf-073c-4f7d-ba1b-0be702b30985
2025-08-29 15:21:17.886181 | orchestrator | 2025-08-29 15:21:17 - clean up volumes
2025-08-29 15:21:18.009036 | orchestrator | 2025-08-29 15:21:18 - testbed-volume-1-node-base
2025-08-29 15:21:18.057169 | orchestrator | 2025-08-29 15:21:18 - testbed-volume-0-node-base
2025-08-29 15:21:18.100669 | orchestrator | 2025-08-29 15:21:18 - testbed-volume-4-node-base
2025-08-29 15:21:18.145079 | orchestrator | 2025-08-29 15:21:18 - testbed-volume-3-node-base
2025-08-29 15:21:18.186055 | orchestrator | 2025-08-29 15:21:18 - testbed-volume-2-node-base
2025-08-29 15:21:18.226781 | orchestrator | 2025-08-29 15:21:18 - testbed-volume-5-node-base
2025-08-29 15:21:18.267368 | orchestrator | 2025-08-29 15:21:18 - testbed-volume-1-node-4
2025-08-29 15:21:18.309848 | orchestrator | 2025-08-29 15:21:18 - testbed-volume-5-node-5
2025-08-29 15:21:18.354100 | orchestrator | 2025-08-29 15:21:18 - testbed-volume-2-node-5
2025-08-29 15:21:18.398809 | orchestrator | 2025-08-29 15:21:18 - testbed-volume-4-node-4
2025-08-29 15:21:18.441604 | orchestrator | 2025-08-29 15:21:18 - testbed-volume-3-node-3
2025-08-29 15:21:18.489192 | orchestrator | 2025-08-29 15:21:18 - testbed-volume-0-node-3
2025-08-29 15:21:18.532142 | orchestrator | 2025-08-29 15:21:18 - testbed-volume-6-node-3
2025-08-29 15:21:18.575730 | orchestrator | 2025-08-29 15:21:18 - testbed-volume-8-node-5
2025-08-29 15:21:18.620244 | orchestrator | 2025-08-29 15:21:18 - testbed-volume-manager-base
2025-08-29 15:21:18.663688 | orchestrator | 2025-08-29 15:21:18 - testbed-volume-7-node-4
2025-08-29 15:21:18.701700 | orchestrator | 2025-08-29 15:21:18 - disconnect routers
2025-08-29 15:21:18.814372 | orchestrator | 2025-08-29 15:21:18 - testbed
2025-08-29 15:21:19.777547 | orchestrator | 2025-08-29 15:21:19 - clean up subnets
2025-08-29 15:21:19.829712 | orchestrator | 2025-08-29 15:21:19 - subnet-testbed-management
2025-08-29 15:21:19.993804 | orchestrator | 2025-08-29 15:21:19 - clean up networks
2025-08-29 15:21:20.137708 | orchestrator | 2025-08-29 15:21:20 - net-testbed-management
2025-08-29 15:21:20.418345 | orchestrator | 2025-08-29 15:21:20 - clean up security groups
2025-08-29 15:21:20.466850 | orchestrator | 2025-08-29 15:21:20 - testbed-node
2025-08-29 15:21:20.580052 | orchestrator | 2025-08-29 15:21:20 - testbed-management
2025-08-29 15:21:20.687223 | orchestrator | 2025-08-29 15:21:20 - clean up floating ips
2025-08-29 15:21:20.718333 |
orchestrator | 2025-08-29 15:21:20 - 81.163.192.249
2025-08-29 15:21:21.115151 | orchestrator | 2025-08-29 15:21:21 - clean up routers
2025-08-29 15:21:21.213094 | orchestrator | 2025-08-29 15:21:21 - testbed
2025-08-29 15:21:22.383174 | orchestrator | ok: Runtime: 0:00:21.902194
2025-08-29 15:21:22.387574 |
2025-08-29 15:21:22.387733 | PLAY RECAP
2025-08-29 15:21:22.387871 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-08-29 15:21:22.387938 |
2025-08-29 15:21:22.507664 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-08-29 15:21:22.509909 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-08-29 15:21:23.166301 |
2025-08-29 15:21:23.166444 | PLAY [Cleanup play]
2025-08-29 15:21:23.181675 |
2025-08-29 15:21:23.181781 | TASK [Set cloud fact (Zuul deployment)]
2025-08-29 15:21:23.231270 | orchestrator | ok
2025-08-29 15:21:23.237666 |
2025-08-29 15:21:23.237763 | TASK [Set cloud fact (local deployment)]
2025-08-29 15:21:23.270903 | orchestrator | skipping: Conditional result was False
2025-08-29 15:21:23.279323 |
2025-08-29 15:21:23.279457 | TASK [Clean the cloud environment]
2025-08-29 15:21:24.359941 | orchestrator | 2025-08-29 15:21:24 - clean up servers
2025-08-29 15:21:24.834718 | orchestrator | 2025-08-29 15:21:24 - clean up keypairs
2025-08-29 15:21:24.850144 | orchestrator | 2025-08-29 15:21:24 - wait for servers to be gone
2025-08-29 15:21:24.888870 | orchestrator | 2025-08-29 15:21:24 - clean up ports
2025-08-29 15:21:24.963259 | orchestrator | 2025-08-29 15:21:24 - clean up volumes
2025-08-29 15:21:25.025455 | orchestrator | 2025-08-29 15:21:25 - disconnect routers
2025-08-29 15:21:25.046319 | orchestrator | 2025-08-29 15:21:25 - clean up subnets
2025-08-29 15:21:25.069095 | orchestrator | 2025-08-29 15:21:25 - clean up networks
2025-08-29 15:21:25.222043 | orchestrator | 2025-08-29 15:21:25 - clean up security groups
2025-08-29 15:21:25.260632 | orchestrator | 2025-08-29 15:21:25 - clean up floating ips
2025-08-29 15:21:25.287314 | orchestrator | 2025-08-29 15:21:25 - clean up routers
2025-08-29 15:21:25.826557 | orchestrator | ok: Runtime: 0:00:01.353015
2025-08-29 15:21:25.829835 |
2025-08-29 15:21:25.829993 | PLAY RECAP
2025-08-29 15:21:25.830117 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-08-29 15:21:25.830178 |
2025-08-29 15:21:25.925976 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-08-29 15:21:25.927997 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-08-29 15:21:26.594303 |
2025-08-29 15:21:26.594462 | PLAY [Base post-fetch]
2025-08-29 15:21:26.608227 |
2025-08-29 15:21:26.608329 | TASK [fetch-output : Set log path for multiple nodes]
2025-08-29 15:21:26.652399 | orchestrator | skipping: Conditional result was False
2025-08-29 15:21:26.658645 |
2025-08-29 15:21:26.658759 | TASK [fetch-output : Set log path for single node]
2025-08-29 15:21:26.699863 | orchestrator | ok
2025-08-29 15:21:26.707729 |
2025-08-29 15:21:26.707823 | LOOP [fetch-output : Ensure local output dirs]
2025-08-29 15:21:27.130078 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/adc95f4e315d487b829a740e42876478/work/logs"
2025-08-29 15:21:27.352306 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/adc95f4e315d487b829a740e42876478/work/artifacts"
2025-08-29 15:21:27.592111 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/adc95f4e315d487b829a740e42876478/work/docs"
2025-08-29 15:21:27.611854 |
2025-08-29 15:21:27.612019 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-08-29 15:21:28.466311 | orchestrator | changed: .d..t...... ./
2025-08-29 15:21:28.466609 | orchestrator | changed: All items complete
2025-08-29 15:21:28.466652 |
2025-08-29 15:21:29.188774 | orchestrator | changed: .d..t......
./
2025-08-29 15:21:29.911869 | orchestrator | changed: .d..t...... ./
2025-08-29 15:21:29.937515 |
2025-08-29 15:21:29.937637 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-08-29 15:21:29.964207 | orchestrator | skipping: Conditional result was False
2025-08-29 15:21:29.971183 | orchestrator | skipping: Conditional result was False
2025-08-29 15:21:29.988813 |
2025-08-29 15:21:29.988889 | PLAY RECAP
2025-08-29 15:21:29.988941 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-08-29 15:21:29.988968 |
2025-08-29 15:21:30.078503 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-08-29 15:21:30.079705 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-08-29 15:21:30.733700 |
2025-08-29 15:21:30.733834 | PLAY [Base post]
2025-08-29 15:21:30.746792 |
2025-08-29 15:21:30.746931 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-08-29 15:21:31.689284 | orchestrator | changed
2025-08-29 15:21:31.696556 |
2025-08-29 15:21:31.696673 | PLAY RECAP
2025-08-29 15:21:31.696741 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-08-29 15:21:31.696806 |
2025-08-29 15:21:31.834462 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-08-29 15:21:31.835572 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-08-29 15:21:32.670277 |
2025-08-29 15:21:32.670497 | PLAY [Base post-logs]
2025-08-29 15:21:32.681951 |
2025-08-29 15:21:32.682094 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-08-29 15:21:33.229043 | localhost | changed
2025-08-29 15:21:33.240867 |
2025-08-29 15:21:33.241051 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-08-29 15:21:33.271714 | localhost | ok
2025-08-29 15:21:33.279122 |
2025-08-29 15:21:33.279307 | TASK [Set zuul-log-path fact]
2025-08-29 15:21:33.308792 | localhost | ok
2025-08-29 15:21:33.318887 |
2025-08-29 15:21:33.319006 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-08-29 15:21:33.347443 | localhost | ok
2025-08-29 15:21:33.354099 |
2025-08-29 15:21:33.354279 | TASK [upload-logs : Create log directories]
2025-08-29 15:21:33.899896 | localhost | changed
2025-08-29 15:21:33.903022 |
2025-08-29 15:21:33.903134 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-08-29 15:21:34.433729 | localhost -> localhost | ok: Runtime: 0:00:00.011583
2025-08-29 15:21:34.438494 |
2025-08-29 15:21:34.438618 | TASK [upload-logs : Upload logs to log server]
2025-08-29 15:21:35.023475 | localhost | Output suppressed because no_log was given
2025-08-29 15:21:35.025907 |
2025-08-29 15:21:35.026039 | LOOP [upload-logs : Compress console log and json output]
2025-08-29 15:21:35.085244 | localhost | skipping: Conditional result was False
2025-08-29 15:21:35.090544 | localhost | skipping: Conditional result was False
2025-08-29 15:21:35.097788 |
2025-08-29 15:21:35.097985 | LOOP [upload-logs : Upload compressed console log and json output]
2025-08-29 15:21:35.149518 | localhost | skipping: Conditional result was False
2025-08-29 15:21:35.150132 |
2025-08-29 15:21:35.153620 | localhost | skipping: Conditional result was False
2025-08-29 15:21:35.160744 |
2025-08-29 15:21:35.160949 | LOOP [upload-logs : Upload console log and json output]